Only bufferevent_sock has evbuffer_freeze()/evbuffer_unfreeze() and the ctrl
ops, so we don't need to fix the other bufferevents (be_pair doesn't have a
ctrl op).
Found while draining buffers in the http layer, and hence getting a
501-not-implemented error in regress http/.. (with some custom hacking). The
drain is needed because otherwise we can get nasty bugs with part of the
previous *request* left in the current *request*, and hence parsing errors.
And now we have failures:
http/non_lingering_close: [forking] [err] ../http.c:1326: Assertion !evbuffer_drain(tmp, -1) failed in ../http.c
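For reference, here is the relationship between freezing and draining in
miniature (a standalone sketch against the public evbuffer API, not the http
code itself):
```
#include <event2/buffer.h>
#include <assert.h>
#include <string.h>

int main(void)
{
	struct evbuffer *buf = evbuffer_new();
	evbuffer_add(buf, "stale request", strlen("stale request"));

	/* While the front of the buffer is frozen, draining fails... */
	evbuffer_freeze(buf, 1);
	assert(evbuffer_drain(buf, -1) == -1);

	/* ...and succeeds again once the front is unfrozen. */
	evbuffer_unfreeze(buf, 1);
	assert(evbuffer_drain(buf, -1) == 0);

	evbuffer_free(buf);
	return 0;
}
```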
- add_compiler_flags() must accept an array, IOW just ARGN will be enough
- add_compiler_flags() was called with a variable name instead of its value
P.S. Also fix some alignment issues.
P.P.S. More cmake issues are expected, since CFLAGS actually work now.
P.P.P.S. Some issues with the cmake cache are possible, so just reset it.
According to https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html:
-Wswitch-enum
Warn whenever a switch statement has an index of enumerated type and lacks a
case for one or more of the named codes of that enumeration. case labels
outside the enumeration range also provoke warnings when this option is used.
The only difference between -Wswitch and this option is that this option
gives a warning about an omitted enumeration code even if there is a *default
label*.
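A made-up example of the difference (nothing here is libevent code):
```
/* With -Wswitch (part of -Wall) this compiles silently, because the
 * default label covers the missing enumerator.  With -Wswitch-enum GCC
 * still warns that CLOSED is not handled explicitly. */
enum conn_state { IDLE, READING, WRITING, CLOSED };

const char *state_name(enum conn_state s)
{
	switch (s) {
	case IDLE:    return "idle";
	case READING: return "reading";
	case WRITING: return "writing";
	default:      return "other";
	}
}
```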
In short: now `evhttp_set_max_body_size()` is browser-friendly.
This patch set implements lingering close (similar to what nginx has), which
makes the web server more graceful towards existing web browsers:
- without EVHTTP_CON_LINGERING_CLOSE/EVHTTP_SERVER_LINGERING_CLOSE (as before):
  it will read min(max_body_size, Content-Length)
- with EVHTTP_CON_LINGERING_CLOSE/EVHTTP_SERVER_LINGERING_CLOSE:
  it will read max(max_body_size, Content-Length), and web browsers will show
  "413 Request Entity Too Large".
It also fixes a client-side bug with non-lingering-close web servers
(EVHTTP_CON_READ_ON_WRITE_ERROR), found while implementing server-side
lingering close.
* more-graceful-http-v10:
test: http/lingering_close: cover EVHTTP_SERVER_LINGERING_CLOSE
test: http/non_lingering_close: cover ~EVHTTP_SERVER_LINGERING_CLOSE
test: http/*: update expected HTTP codes when body exceeds `max_body_size`
http: take EVHTTP_CON_LINGERING_CLOSE into account for "Expect: 100-Continue"
test: http/data_length_constraints: set EVHTTP_CON_READ_ON_WRITE_ERROR
http: lingering close (like nginx have) for entity-too-large
http: read server response even after server closed the connection
test: increase buffer size for http/data_length_constraints to trigger EPIPE
Fixes: #321
Covered-by: http/non_lingering_close
Covered-by: http/lingering_close
Also, since the code became more generic after this patch, we now respond with
HTTP_ENTITYTOOLARGE even without "Expect: 100-Continue", which is correct per
the RFC.
Refs: #321
v2: remove EVHTTP_CON_ABOUT_TO_CLOSE
By lingering close I mean what nginx means by that term: read the whole body
even if its size is greater than `max_body_size`, since otherwise browsers on
win32 (including Chrome) fail to read the HTTP status, entity-too-large (while
Chrome on Linux, for instance, is fine), and the same applies to badly written
HTTP clients.
Refs: #321
v2: do this only under EVHTTP_SERVER_LINGERING_CLOSE
Otherwise, if we try to write more data than the server can accept (see
`evhttp_set_max_body_size()` for the libevent server), we will get `EPIPE` and
will not try to read the server's response, which for now must contain a 400
error (not strictly correct though; it should be 413).
```
$ strace regress --no-fork http/data_length_constraints
...
connect(10, {sa_family=AF_INET, sin_port=htons(43988), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
...
writev(10, [{"POST / HTTP/1.1\r\nHost: somehost\r"..., 60}, {"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16324}], 2) = 16384
epoll_wait(5, [{EPOLLOUT, {u32=10, u64=10}}, {EPOLLIN, {u32=11, u64=11}}], 32, 50000) = 2
writev(10, [{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384}], 1) = 16384
ioctl(11, FIONREAD, [32768]) = 0
readv(11, [{"POST / HTTP/1.1\r\nHost: somehost\r"..., 4096}], 1) = 4096
epoll_ctl(5, EPOLL_CTL_DEL, 11, 0x7fff09d41e50) = 0
epoll_ctl(5, EPOLL_CTL_ADD, 11, {EPOLLOUT, {u32=11, u64=11}}) = 0
epoll_wait(5, [{EPOLLOUT, {u32=10, u64=10}}, {EPOLLOUT, {u32=11, u64=11}}], 32, 50000) = 2
writev(10, [{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384}], 1) = 16384
writev(11, [{"HTTP/1.1 400 Bad Request\r\nConten"..., 129}, {"<HTML><HEAD>\n<TITLE>400 Bad Requ"..., 94}], 2) = 223
epoll_ctl(5, EPOLL_CTL_DEL, 11, 0x7fff09d42080) = 0
shutdown(11, SHUT_WR) = 0
close(11) = 0
epoll_wait(5, [{EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=10, u64=10}}], 32, 50000) = 1
writev(10, [{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384}], 1) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=13954, si_uid=1000} ---
epoll_ctl(5, EPOLL_CTL_DEL, 10, 0x7fff09d42010) = 0
shutdown(10, SHUT_WR) = -1 ENOTCONN (Transport endpoint is not connected)
close(10) = 0
write(1, "\n FAIL ../test/regress_http.c:3"..., 37
```
A careful reader may ask why it sends the error even though it has not read
`evcon->max_body_size` bytes yet; the answer is the check of
`evcon->max_body_size` against the `Content-Length` header, which contains
~8MB (minus 2 bytes).
Also, if we do not drain the output buffer, we will send the buffer left over
from the previous request instead of the request built via
`evhttp_make_header()`.
Fixes: http/data_length_constraints
Refs: #321
v2: do this only under EVHTTP_CON_READ_ON_WRITE_ERROR flag
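On the client side the flag is per-connection; a sketch (again assuming the
connection-level setter from this series):
```
#include <event2/event.h>
#include <event2/http.h>

static struct evhttp_connection *make_conn(struct event_base *base)
{
	struct evhttp_connection *evcon =
	    evhttp_connection_base_new(base, NULL, "127.0.0.1", 8080);

	/* Keep reading the response even if our write side hits EPIPE,
	 * so the error the server already queued is not lost. */
	evhttp_connection_set_flags(evcon, EVHTTP_CON_READ_ON_WRITE_ERROR);

	return evcon;
}
```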
With a bigger buffer it cannot be written with a single writev(2), and hence we
can trigger trickier cases, like calling writecb/readcb more than once.
Refs: #321
And we can't make them contiguous, since the latter is a public API, and doing
so would break binary compatibility.
Also add an extra check against EVHTTP_CON_PUBLIC_FLAGS_END, in case somebody
(the implementer, I mean) forgets about this.
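The idea of the check, in a hypothetical standalone form (values and names are
illustrative, not the real definitions):
```
/* Public flags must stay below the boundary... */
#define PUBLIC_FLAGS_END  0x100000   /* stand-in for EVHTTP_CON_PUBLIC_FLAGS_END */
/* ...and internal flags at or above it, so the two can never collide. */
#define INTERNAL_FLAG_FOO (PUBLIC_FLAGS_END << 1)

/* Hypothetical guard for the public setter: refuse anything that is
 * not a public flag, so user code cannot flip internal bits. */
static int set_public_flags_checked(int *flags_out, int flags)
{
	if (flags & ~(PUBLIC_FLAGS_END - 1))
		return -1;
	*flags_out = flags;
	return 0;
}
```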
Refs: #182
@asn-the-goblin-slayer:
"the name_parse() function in libevent's DNS code is vulnerable to a buffer overread.
971 if (cp != name_out) {
972 if (cp + 1 >= end) return -1;
973 *cp++ = '.';
974 }
975 if (cp + label_len >= end) return -1;
976 memcpy(cp, packet + j, label_len);
977 cp += label_len;
978 j += label_len;
No check is made against length before the memcpy occurs.
This was found through the Tor bug bounty program and the discovery should be credited to 'Guido Vranken'."
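In other words, the remaining packet length has to be validated too before the
label is copied out; roughly (a hypothetical helper, not the actual patch):
```
#include <string.h>

/* Copy one label of `label_len` bytes starting at offset `j` of a
 * `length`-byte packet, refusing to read past the end of the packet
 * (the overread above) or to write past the end of the output buffer. */
static int copy_label(char *out, const char *out_end,
    const unsigned char *packet, size_t length, size_t j, size_t label_len)
{
	if (j > length || label_len > length - j)
		return -1;	/* would read past the packet */
	if ((size_t)(out_end - out) < label_len)
		return -1;	/* would write past name_out */
	memcpy(out, packet + j, label_len);
	return 0;
}
```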
Reproducer for gdb (https://gist.github.com/azat/e4fcf540e9b89ab86d02):
set $PROT_NONE=0x0
set $PROT_READ=0x1
set $PROT_WRITE=0x2
set $MAP_ANONYMOUS=0x20
set $MAP_SHARED=0x01
set $MAP_FIXED=0x10
set $MAP_32BIT=0x40
start
set $length=202
# overread
set $length=2
# allocate with mmap to have a seg fault on page boundary
set $l=(1<<20)*2
p mmap(0, $l, $PROT_READ|$PROT_WRITE, $MAP_ANONYMOUS|$MAP_SHARED|$MAP_32BIT, -1, 0)
set $packet=(char *)$1+$l-$length
# hack the packet
set $packet[0]=63
set $packet[1]='/'
p malloc(sizeof(int))
set $idx=(int *)$2
set $idx[0]=0
set $name_out_len=202
p malloc($name_out_len)
set $name_out=$3
# have WRITE only mapping to fail on read
set $end=$1+$l
p (void *)mmap($end, 1<<12, $PROT_NONE, $MAP_ANONYMOUS|$MAP_SHARED|$MAP_FIXED|$MAP_32BIT, -1, 0)
set $m=$4
p name_parse($packet, $length, $idx, $name_out, $name_out_len)
x/2s (char *)$name_out
Before this patch:
$ gdb -ex 'source gdb' dns-example
$1 = 1073741824
$2 = (void *) 0x633010
$3 = (void *) 0x633030
$4 = (void *) 0x40200000
Program received signal SIGSEGV, Segmentation fault.
__memcpy_sse2_unaligned () at memcpy-sse2-unaligned.S:33
After this patch:
$ gdb -ex 'source gdb' dns-example
$1 = 1073741824
$2 = (void *) 0x633010
$3 = (void *) 0x633030
$4 = (void *) 0x40200000
$5 = -1
0x633030: "/"
0x633032: ""
(gdb) p $m
$6 = (void *) 0x40200000
(gdb) p $1
$7 = 1073741824
(gdb) p/x $1
$8 = 0x40000000
(gdb) quit
P.S. Also drop one duplicated condition.
Fixes: #317
@asn-the-goblin-slayer:
"Length between '[' and ']' is cast to signed 32 bit integer on line 1815. Is
the length is more than 2<<31 (INT_MAX), len will hold a negative value.
Consequently, it will pass the check at line 1816. Segfault happens at line
1819.
Generate a resolv.conf with generate-resolv.conf, then compile and run
poc.c. See entry-functions.txt for functions in tor that might be
vulnerable.
Please credit 'Guido Vranken' for this discovery through the Tor bug bounty
program."
Reproducer for gdb (https://gist.github.com/azat/be2b0d5e9417ba0dfe2c):
start
p (1ULL<<31)+1ULL
# $1 = 2147483649
p malloc(sizeof(struct sockaddr))
# $2 = (void *) 0x646010
p malloc(sizeof(int))
# $3 = (void *) 0x646030
p malloc($1)
# $4 = (void *) 0x7fff76a2a010
p memset($4, 1, $1)
# $5 = 1990369296
p (char *)$4
# $6 = 0x7fff76a2a010 '\001' <repeats 200 times>...
set $6[0]='['
set $6[$1]=']'
p evutil_parse_sockaddr_port($4, $2, $3)
# $7 = -1
Before:
$ gdb bin/http-connect < gdb
(gdb) $1 = 2147483649
(gdb) (gdb) $2 = (void *) 0x646010
(gdb) (gdb) $3 = (void *) 0x646030
(gdb) (gdb) $4 = (void *) 0x7fff76a2a010
(gdb) (gdb) $5 = 1990369296
(gdb) (gdb) $6 = 0x7fff76a2a010 '\001' <repeats 200 times>...
(gdb) (gdb) (gdb) (gdb)
Program received signal SIGSEGV, Segmentation fault.
__memcpy_sse2_unaligned () at memcpy-sse2-unaligned.S:36
After:
$ gdb bin/http-connect < gdb
(gdb) $1 = 2147483649
(gdb) (gdb) $2 = (void *) 0x646010
(gdb) (gdb) $3 = (void *) 0x646030
(gdb) (gdb) $4 = (void *) 0x7fff76a2a010
(gdb) (gdb) $5 = 1990369296
(gdb) (gdb) $6 = 0x7fff76a2a010 '\001' <repeats 200 times>...
(gdb) (gdb) (gdb) (gdb) $7 = -1
(gdb) (gdb) quit
Fixes: #318
Otherwise that #ifdef in visibility.h is useless, and __declspec(dllimport)
will always be on.
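The usual shape of such a header, for context (macro names are illustrative,
not necessarily the ones libevent uses):
```
#if defined(_WIN32) && defined(MYLIB_SHARED)
/* The build system must define the "building the DLL" macro while
 * compiling the library itself; without it, this branch is dead and
 * every translation unit, including the library, gets dllimport. */
# if defined(MYLIB_BUILDING_DLL)
#  define MYLIB_EXPORT __declspec(dllexport)
# else
#  define MYLIB_EXPORT __declspec(dllimport)
# endif
#else
# define MYLIB_EXPORT
#endif
```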
Fixes: #314
Fixes: 4545fa9b6866df47ce2f908631a84477a94d5f49 ("Add option to build shared
library")
Since ssize_t is POSIX, Windows/VS also has it, but via BaseTsd.h; plus the
logic prefers "ssize_t" (lowercase) over "SSIZE_T" (uppercase) even when only
the latter is available -- fix this too.
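The intended preference order, sketched with approximate macro names (the
EVENT__HAVE_* names below are an assumption, not verbatim):
```
#if defined(EVENT__HAVE_SSIZE_T)            /* POSIX lowercase form */
typedef ssize_t ev_ssize_t;
#elif defined(EVENT__HAVE_SSIZE_T_UPPER)    /* hypothetical: only SSIZE_T found */
#include <BaseTsd.h>
typedef SSIZE_T ev_ssize_t;
#else
typedef long ev_ssize_t;
#endif
```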
Refs: #311
Since we have some issues (see refs) with changing the waiting order in
event_del(), I wrote this simple test; maybe it can explain something, or at
least cover the current behavior and show when it breaks.
P.S. We really need to avoid stuff like let's-test-with-sleep/usleep.
Refs: #225
Refs: #226
Refs: #236
* event_reinit-for-signals-v3:
test/regress: cover existing signal callbacks and fork() + event_reinit()
test/regress: cover signals after fork() + event_reinit()
test/regress: main/fork: rewrite assertions by just removing event in callback
event_reinit: make signals works after fork() without evsig_add()
event_reinit: always re-init signal's socketpair
Fixes: #307
Instead of assigning a value to some variable (got_child) and scheduling an
exit from the loop in that callback, just remove the event for that signal; the
event loop will exit automatically when there are no events left.
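The pattern, sketched (not the test verbatim):
```
#include <event2/event.h>
#include <signal.h>

/* The callback removes its own (persistent) event; once nothing is
 * pending, event_base_dispatch() returns by itself -- no got_child
 * flag, no loopexit call from the handler. */
static void sigchld_cb(evutil_socket_t sig, short events, void *arg)
{
	struct event *ev = arg;
	(void)sig; (void)events;
	event_del(ev);
}

static void wait_for_sigchld(struct event_base *base)
{
	struct event *ev = event_new(base, SIGCHLD, EV_SIGNAL | EV_PERSIST,
	    sigchld_cb, event_self_cbarg());
	event_add(ev, NULL);
	event_base_dispatch(base);
	event_free(ev);
}
```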
event_reinit() removes the event, but only evsig_add puts it back. So any
signals set up before event_reinit will be ignored until another signal is
added.
Fixes: #307
Before this patch event_reinit() closed the signal socketpair fds and
re-created them only if signals had been added. This is wrong, since the
socketpair fds are created at backend init, and if we do not re-create them,
bad things happen in child/parent signal handling (and indeed this is what
happens for backends that do not need reinit, like select).
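The scenario this is about, sketched:
```
#include <event2/event.h>
#include <unistd.h>

/* Signal events added before fork() must keep working in the child
 * after event_reinit(); that in turn requires the internal signal
 * socketpair to be re-created unconditionally. */
static void run_child(struct event_base *base)
{
	pid_t pid = fork();
	if (pid == 0) {
		if (event_reinit(base) == -1)
			_exit(1);
		/* signal events added before fork() are usable again here */
		event_base_dispatch(base);
		_exit(0);
	}
}
```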
Fixes: #307