As in the `sockaddr_in` structure in /usr/include/netinet/in.h
@azat: convert all other users (bench, compat, ..) and tweak message
Fixes: #178
Fixes: #196
Refs: 6bf1ca78
Link: https://codereview.appspot.com/156040043/#msg4
http_uriencode_test() (in test/regress_http.c) has been failing since
72afe4c: "hello\0world" is encoded as "hello" instead of "hello%00world".
This is because a misplaced overflow check caused the non-negative "size"
parameter to be ignored for within-bounds URIs.
Fixes: #392
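A minimal sketch of what the test expects, assuming the public
`evhttp_uriencode()` entry point with an explicit non-negative size:

```
#include <stdio.h>
#include <stdlib.h>
#include <event2/http.h>

int main(void)
{
	/* 11 bytes, including the embedded NUL */
	static const char in[] = "hello\0world";
	/* with a non-negative size the NUL is data, not a terminator */
	char *out = evhttp_uriencode(in, sizeof(in) - 1, 0);
	if (out) {
		printf("%s\n", out); /* expected: hello%00world */
		free(out);
	}
	return 0;
}
```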
../http.c:589:6: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
if (!req->kind == EVHTTP_REQUEST || !REQ_VERSION_ATLEAST(req, 1, 1))
^ ~~
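The warning points at a real bug: `!req->kind` negates the kind before
comparing it. A sketch of the presumably intended condition:

```
if (req->kind != EVHTTP_REQUEST || !REQ_VERSION_ATLEAST(req, 1, 1))
```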
Otherwise:
http/cancel_by_host_ns_timeout_inactive_server: [msg] Nameserver 127.0.0.1:37035 has failed: request timed out.
[msg] All nameservers have failed
OK
1 tests ok. (0 skipped)
==26211==
==26211== FILE DESCRIPTORS: 3 open at exit.
==26211== Open file descriptor 2: /dev/pts/47
==26211== <inherited from parent>
==26211==
==26211== Open file descriptor 1: /dev/pts/47
==26211== <inherited from parent>
==26211==
==26211== Open file descriptor 0: /dev/pts/47
==26211== <inherited from parent>
==26211==
==26211==
==26211== HEAP SUMMARY:
==26211== in use at exit: 1,112 bytes in 5 blocks
==26211== total heap usage: 149 allocs, 144 frees, 18,826 bytes allocated
==26211==
==26211== 40 bytes in 1 blocks are indirectly lost in loss record 1 of 5
==26211== at 0x4C2BBD5: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26211== by 0x4AAEB2: event_mm_calloc_ (event.c:3459)
==26211== by 0x498F5B: evbuffer_add_cb (buffer.c:3309)
==26211== by 0x4A0EF5: bufferevent_socket_new (bufferevent_sock.c:366)
==26211== by 0x4BFADF: evhttp_connection_base_bufferevent_new (http.c:2375)
==26211== by 0x4BFC8F: evhttp_connection_base_new (http.c:2427)
==26211== by 0x460DAA: http_cancel_test (regress_http.c:1417)
==26211== by 0x490A78: testcase_run_bare_ (tinytest.c:105)
==26211== by 0x490D5A: testcase_run_one (tinytest.c:252)
==26211== by 0x491699: tinytest_main (tinytest.c:434)
==26211== by 0x47E0E0: main (regress_main.c:461)
==26211==
==26211== 136 bytes in 1 blocks are indirectly lost in loss record 2 of 5
==26211== at 0x4C2BBD5: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26211== by 0x4AAEB2: event_mm_calloc_ (event.c:3459)
==26211== by 0x491FF0: evbuffer_new (buffer.c:365)
==26211== by 0x49A1BE: bufferevent_init_common_ (bufferevent.c:300)
==26211== by 0x4A0E44: bufferevent_socket_new (bufferevent_sock.c:353)
==26211== by 0x4BFADF: evhttp_connection_base_bufferevent_new (http.c:2375)
==26211== by 0x4BFC8F: evhttp_connection_base_new (http.c:2427)
==26211== by 0x460DAA: http_cancel_test (regress_http.c:1417)
==26211== by 0x490A78: testcase_run_bare_ (tinytest.c:105)
==26211== by 0x490D5A: testcase_run_one (tinytest.c:252)
==26211== by 0x491699: tinytest_main (tinytest.c:434)
==26211== by 0x47E0E0: main (regress_main.c:461)
==26211==
==26211== 136 bytes in 1 blocks are indirectly lost in loss record 3 of 5
==26211== at 0x4C2BBD5: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26211== by 0x4AAEB2: event_mm_calloc_ (event.c:3459)
==26211== by 0x491FF0: evbuffer_new (buffer.c:365)
==26211== by 0x49A1FB: bufferevent_init_common_ (bufferevent.c:305)
==26211== by 0x4A0E44: bufferevent_socket_new (bufferevent_sock.c:353)
==26211== by 0x4BFADF: evhttp_connection_base_bufferevent_new (http.c:2375)
==26211== by 0x4BFC8F: evhttp_connection_base_new (http.c:2427)
==26211== by 0x460DAA: http_cancel_test (regress_http.c:1417)
==26211== by 0x490A78: testcase_run_bare_ (tinytest.c:105)
==26211== by 0x490D5A: testcase_run_one (tinytest.c:252)
==26211== by 0x491699: tinytest_main (tinytest.c:434)
==26211== by 0x47E0E0: main (regress_main.c:461)
==26211==
==26211== 536 bytes in 1 blocks are indirectly lost in loss record 4 of 5
==26211== at 0x4C2BBD5: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26211== by 0x4AAEB2: event_mm_calloc_ (event.c:3459)
==26211== by 0x4A0E15: bufferevent_socket_new (bufferevent_sock.c:350)
==26211== by 0x4BFADF: evhttp_connection_base_bufferevent_new (http.c:2375)
==26211== by 0x4BFC8F: evhttp_connection_base_new (http.c:2427)
==26211== by 0x460DAA: http_cancel_test (regress_http.c:1417)
==26211== by 0x490A78: testcase_run_bare_ (tinytest.c:105)
==26211== by 0x490D5A: testcase_run_one (tinytest.c:252)
==26211== by 0x491699: tinytest_main (tinytest.c:434)
==26211== by 0x47E0E0: main (regress_main.c:461)
==26211==
==26211== 1,112 (264 direct, 848 indirect) bytes in 1 blocks are definitely lost in loss record 5 of 5
==26211== at 0x4C2BBD5: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26211== by 0x4AAEB2: event_mm_calloc_ (event.c:3459)
==26211== by 0x4D0564: evdns_getaddrinfo (evdns.c:4685)
==26211== by 0x4B13BA: evutil_getaddrinfo_async_ (evutil.c:1575)
==26211== by 0x4A139E: bufferevent_socket_connect_hostname (bufferevent_sock.c:524)
==26211== by 0x4C02DB: evhttp_connection_connect_ (http.c:2588)
==26211== by 0x4C04DD: evhttp_make_request (http.c:2643)
==26211== by 0x4615FF: http_cancel_test (regress_http.c:1504)
==26211== by 0x490A78: testcase_run_bare_ (tinytest.c:105)
==26211== by 0x490D5A: testcase_run_one (tinytest.c:252)
==26211== by 0x491699: tinytest_main (tinytest.c:434)
==26211== by 0x47E0E0: main (regress_main.c:461)
==26211==
==26211== LEAK SUMMARY:
==26211== definitely lost: 264 bytes in 1 blocks
==26211== indirectly lost: 848 bytes in 4 blocks
==26211== possibly lost: 0 bytes in 0 blocks
==26211== still reachable: 0 bytes in 0 blocks
==26211== suppressed: 0 bytes in 0 blocks
Since we already close the fd there when BEV_OPT_CLOSE_ON_FREE is not set,
and evcon->fd can be stale (i.e. not -1), just take the fd from the
underlying bufferevent to fix this.
And after this patch the following tests report 0 fd leaks instead of 2307:
$ valgrind --leak-check=full --show-reachable=yes --track-fds=yes --error-exitcode=1 regress --no-fork http/cancel..
==11299== FILE DESCRIPTORS: 3 open at exit.
And those are just stdin/stdout/stderr.
Since it can be something other than -1, we must close it; otherwise we leak
the descriptor.
And after this patch the following tests report 2307 fd leaks instead of 2309:
$ valgrind --leak-check=full --show-reachable=yes --track-fds=yes --error-exitcode=1 regress --no-fork http/cancel..
==10853== FILE DESCRIPTORS: 2307 open at exit.
For example win32 doesn't accept such things (maybe it would via overlapped
IO, I'm not sure); I also looked into curl, and it seems the behaviour is the
same (IOW like with EVHTTP_CON_READ_ON_WRITE_ERROR on linux/win32).
Fixes: https://ci.appveyor.com/project/nmathewson/libevent/build/2.1.5.216#L499 (win32)
Fixes: 680742e1665b85487f10c0ef3df021e3b8e98634 ("http: read server response
even after server closed the connection")
v2: v0 simply removed that flag; now it is deprecated instead, and
set_flags() will return -1 for it
evhttp_parse_response_line() can now be called twice, since after
"HTTP/1.1 100 Continue" we can receive "HTTP/1.1 200":
==29162== 9 bytes in 1 blocks are definitely lost in loss record 1 of 1
==29162== at 0x4C29C0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==29162== by 0x5CBF0A9: strdup (in /lib/x86_64-linux-gnu/libc-2.21.so)
==29162== by 0x4AA3AC: event_mm_strdup_ (event.c:3493)
==29162== by 0x4BD843: evhttp_parse_response_line (http.c:1680)
==29162== by 0x4BE333: evhttp_parse_firstline_ (http.c:2013)
==29162== by 0x4BEA4F: evhttp_read_firstline (http.c:2243)
==29162== by 0x4BC5F8: evhttp_read_cb (http.c:1136)
==29162== by 0x4993F1: bufferevent_run_readcb_ (bufferevent.c:233)
==29162== by 0x49FBC0: bufferevent_trigger_nolock_ (bufferevent-internal.h:392)
==29162== by 0x49FF10: bufferevent_readcb (bufferevent_sock.c:208)
==29162== by 0x4A474A: event_persist_closure (event.c:1580)
==29162== by 0x4A49F5: event_process_active_single_queue (event.c:1639)
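A sketch of the likely fix, reusing the names visible in the trace
(`response_code_line`, the `mm_*` allocation wrappers); the second parse must
release the line strdup'd by the first one before storing the new value:

```
/* in evhttp_parse_response_line(): the line may already be set from
 * the earlier "HTTP/1.1 100 Continue" parse, so free it first */
if (req->response_code_line != NULL)
	mm_free(req->response_code_line);
if ((req->response_code_line = mm_strdup(readable)) == NULL) {
	event_warn("%s: strdup", __func__);
	return (-1);
}
```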
Fixes: 0b46b39e95ad77951176f09782138305ba34edf3 ("http: fix "Expect:
100-continue" client side")
Instead of always sending the post data at the beginning of the request,
wait until the server responds with "HTTP/1.1 100 Continue".
Before this patch the server did send "HTTP/1.1 100 Continue", but the client
always sent the post data without waiting for the server's response.
P.S. This patch also touches some not entirely related tab-alignment issues.
Covered-by: http/data_length_constraints
Covered-by: http/read_on_write_error
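A hedged sketch of the client side after this change: the Expect header is
set on the request, and the queued body is only written out once
"HTTP/1.1 100 Continue" arrives (connection setup and callbacks elided;
"somehost" is a placeholder):

```
#include <event2/buffer.h>
#include <event2/http.h>

static void
send_post(struct evhttp_connection *evcon,
    void (*cb)(struct evhttp_request *, void *), void *arg)
{
	struct evhttp_request *req = evhttp_request_new(cb, arg);

	evhttp_add_header(evhttp_request_get_output_headers(req),
	    "Host", "somehost");
	evhttp_add_header(evhttp_request_get_output_headers(req),
	    "Expect", "100-continue");
	/* queued now, transmitted only after the 100 Continue */
	evbuffer_add(evhttp_request_get_output_buffer(req), "data", 4);
	evhttp_make_request(evcon, req, EVHTTP_REQ_POST, "/");
}
```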
Since otherwise we can get nasty bugs, with part of the previous *request*
left in the current *request*, and hence parsing errors.
And now we have failures:
http/non_lingering_close: [forking] [err] ../http.c:1326: Assertion !evbuffer_drain(tmp, -1) failed in ../http.c
Also, since the code became more generic after this patch, we now respond
with HTTP_ENTITYTOOLARGE even without "Expect: 100-continue", which is what
the RFC requires.
Refs: #321
v2: remove EVHTTP_CON_ABOUT_TO_CLOSE
By lingering close I mean what nginx means by that term: read the whole body
even if its size is greater than `max_body_size`. Otherwise browsers on win32
(including chrome) fail to read the http status (entity-too-large), while
chrome on linux, for instance, is fine; the same goes for badly written http
clients.
Refs: #321
v2: do this only under EVHTTP_SERVER_LINGERING_CLOSE
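A minimal sketch of opting a server in, assuming the `evhttp_set_flags()`
setter for the EVHTTP_SERVER_LINGERING_CLOSE flag named above:

```
#include <event2/event.h>
#include <event2/http.h>

static struct evhttp *
make_lingering_server(struct event_base *base)
{
	struct evhttp *http = evhttp_new(base);

	/* reject bodies over ~1MB with entity-too-large ... */
	evhttp_set_max_body_size(http, 1 << 20);
	/* ... but drain the whole body first, so win32 browsers and
	 * badly written clients can still read the status line */
	evhttp_set_flags(http, EVHTTP_SERVER_LINGERING_CLOSE);
	return http;
}
```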
Otherwise, if we try to write more data than the server will accept (see
`evhttp_set_max_body_size()` for the libevent server), we get `EPIPE` and
never try to read the server's response, which for now contains a 400 error
(not strictly correct though; it should be 413).
```
$ strace regress --no-fork http/data_length_constraints
...
connect(10, {sa_family=AF_INET, sin_port=htons(43988), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
...
writev(10, [{"POST / HTTP/1.1\r\nHost: somehost\r"..., 60}, {"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16324}], 2) = 16384
epoll_wait(5, [{EPOLLOUT, {u32=10, u64=10}}, {EPOLLIN, {u32=11, u64=11}}], 32, 50000) = 2
writev(10, [{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384}], 1) = 16384
ioctl(11, FIONREAD, [32768]) = 0
readv(11, [{"POST / HTTP/1.1\r\nHost: somehost\r"..., 4096}], 1) = 4096
epoll_ctl(5, EPOLL_CTL_DEL, 11, 0x7fff09d41e50) = 0
epoll_ctl(5, EPOLL_CTL_ADD, 11, {EPOLLOUT, {u32=11, u64=11}}) = 0
epoll_wait(5, [{EPOLLOUT, {u32=10, u64=10}}, {EPOLLOUT, {u32=11, u64=11}}], 32, 50000) = 2
writev(10, [{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384}], 1) = 16384
writev(11, [{"HTTP/1.1 400 Bad Request\r\nConten"..., 129}, {"<HTML><HEAD>\n<TITLE>400 Bad Requ"..., 94}], 2) = 223
epoll_ctl(5, EPOLL_CTL_DEL, 11, 0x7fff09d42080) = 0
shutdown(11, SHUT_WR) = 0
close(11) = 0
epoll_wait(5, [{EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=10, u64=10}}], 32, 50000) = 1
writev(10, [{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384}], 1) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=13954, si_uid=1000} ---
epoll_ctl(5, EPOLL_CTL_DEL, 10, 0x7fff09d42010) = 0
shutdown(10, SHUT_WR) = -1 ENOTCONN (Transport endpoint is not connected)
close(10) = 0
write(1, "\n FAIL ../test/regress_http.c:3"..., 37
```
A careful reader may ask why the server sends the error even though it has
not yet read `evcon->max_body_size` bytes; the answer is the check of
`evcon->max_body_size` against the `Content-Length` header, which contains
~8MB (minus 2 bytes).
Also, if we don't drain the output buffer, we will send the data left over
from the previous request instead of the request line produced by
`evhttp_make_header()`.
Fixes: http/data_length_constraints
Refs: #321
v2: do this only under EVHTTP_CON_READ_ON_WRITE_ERROR flag
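A minimal sketch of opting a client connection in, assuming the
`evhttp_connection_set_flags()` setter for the flag named above:

```
#include <event2/dns.h>
#include <event2/http.h>

static struct evhttp_connection *
make_conn(struct event_base *base, struct evdns_base *dnsbase,
    unsigned short port)
{
	struct evhttp_connection *evcon =
	    evhttp_connection_base_new(base, dnsbase, "127.0.0.1", port);

	/* read the server's 400/413 after a failed write instead of
	 * surfacing EPIPE and dropping the response */
	evhttp_connection_set_flags(evcon, EVHTTP_CON_READ_ON_WRITE_ERROR);
	return evcon;
}
```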
And we can't make them contiguous, since the latter are public API, and
renumbering would break binary compatibility.
Also add an extra check against EVHTTP_CON_PUBLIC_FLAGS_END, in case somebody
(the implementer, I mean) forgets about this.
Refs: #182
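An illustrative sketch of the check, using the public flag names from
event2/http.h; the exact masking in the tree may differ:

```
#include <event2/http.h>

/* public flags are ABI and stay below EVHTTP_CON_PUBLIC_FLAGS_END;
 * private flags live in a disjoint range, so the public setter must
 * reject anything it does not explicitly know about */
static int
public_flags_are_valid(int flags)
{
	const int avail = EVHTTP_CON_REUSE_CONNECTED_ADDR |
	    EVHTTP_CON_READ_ON_WRITE_ERROR;

	/* the range guard catches an implementer adding a public flag
	 * without extending the mask above */
	return !(flags & ~avail) && flags < EVHTTP_CON_PUBLIC_FLAGS_END;
}
```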
Though CloudABI implements a very large part of POSIX, it does not
provide these header files, for the reason that there is no raw device
access, no resource limiting and no access to the global process table
through wait().
It looks like these header files are not actually needed in theory.
There don't seem to be any constructs in these source files that use
these features, but I suspect they might still be required on some
systems.
Before this patch the http server didn't know the client had disconnected
until it tried to write to it; IOW, to detect whether a client is still alive
you had to write something to its socket. That is inconvenient, since it
requires storing all clients somewhere and polling them periodically, and I
don't see any regressions from leaving EV_READ enabled all the time (like
libevhtp does), since we already reset the read callback in
evhttp_write_buffer() (see http/write_during_read).
Also, since we no longer disable EV_READ, we don't need to re-enable it,
which reduces the number of epoll_ctl() calls.
Covered-by: http/terminate_chunked_oneshot
Covered-by: http/write_during_read
Fixes: #78
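A hedged sketch of what always-armed EV_READ buys on the user side: the
server can register a close callback and learn about the disconnect as it
happens, without probing writes (handler registration elided):

```
#include <event2/http.h>

static void
on_client_close(struct evhttp_connection *evcon, void *arg)
{
	/* drop per-client state here; fires on disconnect now that
	 * EV_READ stays enabled */
}

static void
on_request(struct evhttp_request *req, void *arg)
{
	evhttp_connection_set_closecb(
	    evhttp_request_get_connection(req), on_client_close, arg);
	/* ... start a long-running/chunked reply ... */
}
```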
Before this patch we resolved the domain name on every request retry, but we
can optimize this (since UDP is slow) by using the cached conn_address value,
so do that.
In http the only case where we could store it is once we are already
connected, *but* if we issue the request by domain name, we first have to ask
a nameserver for the IP address, and that is handled by the bufferevent.
So once we have the IP address (from the nameserver) but no connection to it
yet, we can already cache it to avoid extra DNS requests (since UDP is slow),
and we can't do this from the http layer, only from the bufferevent.
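Opting in is a one-liner on the connection; a hedged sketch, assuming the
EVHTTP_CON_REUSE_CONNECTED_ADDR flag this behaviour is exposed through:

```
#include <event2/http.h>

static void
enable_addr_reuse(struct evhttp_connection *evcon)
{
	/* retry against the cached resolved address instead of
	 * re-querying DNS on every retry */
	evhttp_connection_set_flags(evcon, EVHTTP_CON_REUSE_CONNECTED_ADDR);
	evhttp_connection_set_retries(evcon, 3);
}
```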
This will fix some invalid read/write:
==556== Invalid read of size 8
==556== at 0x4E4EEC6: event_queue_remove_timeout (minheap-internal.h:178)
==556== by 0x4E508AA: event_del_nolock_ (event.c:2764)
==556== by 0x4E53535: event_base_loop (event.c:3088)
==556== by 0x406FCFA: dispatch (libcrawl.c:271)
==556== by 0x402863: main (crawler.c:49)
==556== Address 0x68a3f18 is 152 bytes inside a block of size 400 free'd
==556== at 0x4C29C97: free (in /usr/local/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==556== by 0x406F140: renew (libcrawl.c:625)
==556== by 0x4E6CDE9: evhttp_connection_cb_cleanup (http.c:1331)
==556== by 0x4E6E2B2: evhttp_connection_cb (http.c:1424)
==556== by 0x4E4DF2D: bufferevent_writecb (bufferevent_sock.c:310)
==556== by 0x4E52D1D: event_process_active_single_queue (event.c:1584)
==556== by 0x4E53676: event_base_loop (event.c:1676)
==556== by 0x406FCFA: dispatch (libcrawl.c:271)
==556== by 0x402863: main (crawler.c:49)
But this one, I guess, is caused by some earlier invalid write.
It reproduces 100% of the time during massive crawling (because that process
talks to many different servers), but after spending some time trying to
reproduce it with simple tests/utils I gave up for a few days (I have a lot
of work to do); I'm sending this patch as a reminder.
Just in case, here is what I've tried:
- mixing timeouts/retries
- shutting down the http server and bringing it back
- a slow dns server for the first request
- sleeping before accept
- hacking the libevent sources to change the behaviour of the http layer (so
  that it goes into the function I'm interested in).
This patch provides the ability to receive a callback on the completion of a
request. The callback takes place immediately before the request's resources
are released.
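A hedged sketch of using the new hook, assuming it is exposed as
`evhttp_request_set_on_complete_cb()`:

```
#include <event2/http.h>

/* fires immediately before the request's resources are released */
static void
on_complete(struct evhttp_request *req, void *arg)
{
	/* last chance to inspect req or free per-request state */
}

static void
setup_request(struct evhttp_request *req, void *state)
{
	evhttp_request_set_on_complete_cb(req, on_complete, state);
}
```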
evhttp_write_buffer(), used by evhttp_send_reply_chunk(), can take a callback
executed when (part of) the buffer has been written. Using this callback to
schedule the next chunk avoids buffering large amounts of data in memory.
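A hedged sketch of the pattern, assuming the callback is exposed as
`evhttp_send_reply_chunk_with_cb()`; only one chunk is buffered at a time:

```
#include <event2/buffer.h>
#include <event2/http.h>

static int chunks_left = 3; /* toy stream */

static void
send_next_chunk(struct evhttp_connection *evcon, void *arg)
{
	struct evhttp_request *req = arg;
	struct evbuffer *buf;

	if (chunks_left-- == 0) {
		evhttp_send_reply_end(req);
		return;
	}
	buf = evbuffer_new();
	evbuffer_add(buf, "chunk\n", 6);
	/* re-invoked once this chunk has actually been written */
	evhttp_send_reply_chunk_with_cb(req, buf, send_next_chunk, req);
	evbuffer_free(buf);
}

static void
on_request(struct evhttp_request *req, void *arg)
{
	evhttp_send_reply_start(req, HTTP_OK, "OK");
	send_next_chunk(NULL, req);
}
```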
Basically the closing tcp handshake looks like this
(C - client, S - server):
ACK[C] - FIN/ACK[S] - FIN/ACK[S] - ACK[C]
However there are servers that don't close the connection like this; it is
still _considered_ valid, and using the libevent http layer we can issue
requests to such servers.
Modified handshake:
(C - client, S - server)
ACK[C] - RST/ACK[S] - RST/ACK[S]
And in this case we can't extract the IP address from the socket, because it
is already closed, and getpeername() will fail with "transport endpoint is
not connected". So we need to store the address we are connecting to once we
know it, and that is what this patch does.
I have reproduced it, though the reproducer emits some extra packets (I will
try to fix that):
https://github.com/azat/nfq-examples/blob/master/nfqnl_rst_fin.c
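With the address stored at connect time, the peer stays queryable even after
the RST, when getpeername() on the socket would already fail; a hedged
sketch, assuming the `evhttp_connection_get_addr()` accessor:

```
#include <sys/socket.h>
#include <event2/http.h>

static void
log_peer(struct evhttp_connection *evcon)
{
	/* returns the cached address even though the underlying
	 * socket is already closed */
	const struct sockaddr *sa = evhttp_connection_get_addr(evcon);

	if (sa && sa->sa_family == AF_INET) {
		/* format/log the peer address here */
	}
}
```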