The old code called type_var_add() only for its side effect of expanding the
array, and then leaked the new object that the call added to the array.
The new code adds a static function to handle the array resizing.
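A minimal sketch of that pattern, with hypothetical names (struct entry_array
and entry_array_reserve() are illustrative, not the actual Libevent code): one
static helper owns the "make room" logic, so callers that only need extra
capacity never create a throwaway element.

    #include <stdlib.h>

    struct entry;                        /* element type; details don't matter here */

    struct entry_array {
        struct entry **entries;
        size_t n;
        size_t capacity;
    };

    /* Grow the array until it can hold at least 'needed' entries. */
    static int
    entry_array_reserve(struct entry_array *arr, size_t needed)
    {
        size_t new_cap = arr->capacity ? arr->capacity : 8;
        struct entry **tmp;

        if (needed <= arr->capacity)
            return 0;
        while (new_cap < needed)
            new_cap *= 2;
        tmp = realloc(arr->entries, new_cap * sizeof(*tmp));
        if (!tmp)
            return -1;
        arr->entries = tmp;
        arr->capacity = new_cap;
        return 0;
    }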
The old logging code was littered with places where we stored messages in
static char[] fields. This is fine in a single-threaded program, but if you
ever tried to log evdns messages from two threads at once, you'd hit a race.
This patch also refactors evdns's debug_ntop function into a more useful
evutil_sockaddr_port_format() function, with unit tests.
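For illustration, a function in this spirit formats the address and port into
a caller-supplied buffer instead of a shared static one; this is a sketch
under that assumption, not the actual Libevent implementation:

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Thread-safe by construction: the output buffer belongs to the caller. */
    static const char *
    format_sockaddr_port(const struct sockaddr *sa, char *out, size_t outlen)
    {
        char addr[INET6_ADDRSTRLEN];

        if (sa->sa_family == AF_INET) {
            const struct sockaddr_in *sin = (const struct sockaddr_in *)sa;
            if (!inet_ntop(AF_INET, &sin->sin_addr, addr, sizeof(addr)))
                return NULL;
            snprintf(out, outlen, "%s:%d", addr, (int)ntohs(sin->sin_port));
        } else if (sa->sa_family == AF_INET6) {
            const struct sockaddr_in6 *sin6 = (const struct sockaddr_in6 *)sa;
            if (!inet_ntop(AF_INET6, &sin6->sin6_addr, addr, sizeof(addr)))
                return NULL;
            snprintf(out, outlen, "[%s]:%d", addr, (int)ntohs(sin6->sin6_port));
        } else {
            snprintf(out, outlen, "<unknown family %d>", (int)sa->sa_family);
        }
        return out;
    }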
When searching is enabled, evdns may make multiple requests before
calling the user callback with the result. This is a problem because
the same evdns_request handle is not retained for each search request,
so the user cannot reliably cancel the request.
This patch attempts to ensure that the evdns_request handle persists
across search requests.
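As a usage sketch of the intended behavior (the hostname and setup here are
illustrative), cancelling through the returned handle should now work no
matter which of the underlying search queries is currently in flight:

    #include <stdio.h>
    #include <event2/event.h>
    #include <event2/dns.h>

    static void
    resolve_cb(int result, char type, int count, int ttl, void *addresses, void *arg)
    {
        (void)type; (void)count; (void)ttl; (void)addresses; (void)arg;
        printf("lookup finished with result %d\n", result);
    }

    int
    main(void)
    {
        struct event_base *base = event_base_new();
        struct evdns_base *dns = evdns_base_new(base, 1);
        struct evdns_request *req;

        /* With search enabled, this one handle may stand for several
         * underlying queries built from the configured search domains. */
        req = evdns_base_resolve_ipv4(dns, "www.example", 0, resolve_cb, NULL);

        /* The point of the fix: this cancel stays valid across those queries. */
        if (req)
            evdns_cancel_request(dns, req);

        evdns_base_free(dns, 0);
        event_base_free(base);
        return 0;
    }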
Previously, when a sigaction() or signal() call failed, we would free
the element we added to sh_old, but not actually clear the pointer.
This would leave a dangling pointer in sh_old that could cause a
crash later.
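A generic illustration of the bug class and the fix, using hypothetical names
rather than the actual signal-handling code:

    #include <signal.h>
    #include <stdlib.h>

    #define MAX_SIGNALS 64

    /* Previously installed handlers, indexed by signal number. */
    static struct sigaction *saved_handlers[MAX_SIGNALS];

    static int
    set_handler(int sig, const struct sigaction *sa)
    {
        struct sigaction *old;

        if (sig < 0 || sig >= MAX_SIGNALS)
            return -1;
        if (!(old = malloc(sizeof(*old))))
            return -1;
        saved_handlers[sig] = old;

        if (sigaction(sig, sa, old) == -1) {
            free(old);
            saved_handlers[sig] = NULL; /* the fix: don't leave a dangling pointer */
            return -1;
        }
        return 0;
    }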
The EVUTIL_CLOSESOCKET() macro required you to include unistd.h in your
source on POSIX systems. We might as well turn it into a function: an extra
function call is going to be cheap in comparison with the system call.
We retain the EVUTIL_CLOSESOCKET() macro as an alias for the new
evutil_closesocket() function.
(commit message from email by Nick and Sebastian)
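Roughly, a function of this kind has to do something like the following
sketch (the real code lives in evutil.c; the names here are illustrative):

    #include <stdint.h>
    #ifdef _WIN32
    #include <winsock2.h>
    typedef intptr_t sock_t;         /* Winsock sockets are not file descriptors */
    #else
    #include <unistd.h>              /* included here, not by every caller */
    typedef int sock_t;
    #endif

    static int
    close_socket(sock_t s)
    {
    #ifdef _WIN32
        return closesocket(s);
    #else
        return close(s);
    #endif
    }

Callers can keep writing EVUTIL_CLOSESOCKET(fd), since the macro is retained
as an alias for the function.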
This makes evrpc setup more extensible, and helps with Shuo Chen's
work on implementing Google protocol buffers RPC on top of Libevent 2's
evrpc.
This patch breaks binary compatibility with previous versions of
Libevent, since it changes struct evrpc and the signature of
evrpc_register_generic(). Since all compliant code should be calling
evrpc_register_generic via EVRPC_REGISTER, it shouldn't break source
compatibility.
(Code by Shuo Chen; commit message by Nick)
The evbuffer_remove() function copies data from the front of an
evbuffer into an array of char, and removes the data from the buffer.
The new evbuffer_copyout() function behaves the same way, but does not remove the data. This
behavior can be handy for lots of protocols, where you want the
evbuffer to accumulate data until a complete record has arrived.
Lots of people have asked for a function more or less like this, and
though it isn't too hard to code one from evbuffer_peek(), it is
apparently annoying to do it in every app you write. The
evbuffer_peek() function is significantly faster, but it requires that
the user be able to handle data in separate extents.
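As a usage sketch (the 4-byte big-endian length prefix is a made-up framing
format), a reader can peek at the header with evbuffer_copyout() and only
start draining once the whole record has arrived:

    #include <stdlib.h>
    #include <arpa/inet.h>
    #include <event2/buffer.h>
    #include <event2/util.h>

    /* Returns 1 if a record was consumed, 0 if more data is needed, -1 on error. */
    static int
    try_read_record(struct evbuffer *buf, void (*handle)(const char *, size_t))
    {
        ev_uint32_t reclen;
        char *record;

        if (evbuffer_copyout(buf, &reclen, 4) < 4)
            return 0;                          /* header not complete yet */
        reclen = ntohl(reclen);
        if (evbuffer_get_length(buf) < 4 + (size_t)reclen)
            return 0;                          /* body not complete yet */

        if (!(record = malloc(reclen ? reclen : 1)))
            return -1;
        evbuffer_drain(buf, 4);                /* now it is safe to consume */
        evbuffer_remove(buf, record, reclen);
        handle(record, reclen);
        free(record);
        return 1;
    }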
This patch also reimplements evbuffer_remove() as evbuffer_copyout()
followed by evbuffer_drain(). I am reasonably confident that this
won't be a performance hit: the memcpy() overhead should dominate the
cost of walking the list an extra time.
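In other words, the new control flow is roughly equivalent to this sketch
(not the literal Libevent source):

    #include <event2/buffer.h>
    #include <event2/util.h>

    static ev_ssize_t
    remove_like(struct evbuffer *buf, void *out, size_t len)
    {
        ev_ssize_t n = evbuffer_copyout(buf, out, len);
        if (n > 0)
            evbuffer_drain(buf, (size_t)n);    /* the extra walk over the chain list */
        return n;
    }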
The previous evbuffer_expand() was not only incorrect; it was also
inefficient. On every time-versus-memory tradeoff, it chose to burn time
in order to avoid wasting memory. The new code
tries to be a little more balanced: it only resizes an existing chain
when doing so doesn't require too much copying, and when failing to do
so would waste a lot of the chain's space.
This patch also rewrites evbuffer_chain_insert to work properly with
last_with_datap, and adds a few convenience functions to buffer.c.
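A hypothetical sketch of the kind of test the new heuristic makes; the names
and threshold are illustrative, not the actual ones in buffer.c:

    #include <stddef.h>

    #define MAX_COPY_TO_RESIZE 4096            /* illustrative threshold */

    /* Should we grow this chain in place (which copies its current data),
     * or leave it alone and add a new chain after it? */
    static int
    should_resize_in_place(size_t used, size_t capacity, size_t wanted)
    {
        size_t unused = capacity - used;

        if (unused >= wanted)
            return 0;                          /* already fits; nothing to do */
        if (used > MAX_COPY_TO_RESIZE)
            return 0;                          /* too much data to copy around */
        if (unused < capacity / 2)
            return 0;                          /* little space would be wasted anyway */
        return 1;                              /* cheap to copy, costly to abandon */
    }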
Apparently nobody had tested it before on a system that had sendfile.
Why would you have sendfile and not writev? Perhaps you're trying to
test the no-iovecs code to make sure it still works.
To implement evbuffer_expand() properly, you need to be able to
replace the last chunk that has data, which means that we need to keep
track of the next pointer pointing to the last_with_data chunk,
not the last_with_data chunk itself.
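A simplified sketch of the pointer-to-pointer idea (the real struct
evbuffer_chain has more fields, and the real code handles empty buffers):

    #include <stddef.h>

    struct chain {
        struct chain *next;
        size_t off;                     /* bytes of data stored in this chain */
    };

    struct buffer {
        struct chain *first;
        /* Points either at &buf->first or at some chain's ->next field,
         * namely the pointer that leads to the last chain holding data. */
        struct chain **last_with_datap;
    };

    /* Replace the last chain that has data, without re-walking the list
     * to find its predecessor.  Assumes the buffer is non-empty. */
    static void
    replace_last_with_data(struct buffer *buf, struct chain *replacement)
    {
        replacement->next = (*buf->last_with_datap)->next;
        *buf->last_with_datap = replacement;
    }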
evbuffer_pullup() returns NULL if you try to pull up more bytes than
are there. But evbuffer_write_atmost would sometimes ask for more
bytes to be pulled up than it had, get a NULL, and fail.
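The defensive pattern behind the fix looks roughly like this sketch (the
wrapper name is illustrative):

    #include <event2/buffer.h>
    #include <event2/util.h>

    /* Never ask for more contiguous bytes than the buffer actually holds. */
    static unsigned char *
    pullup_at_most(struct evbuffer *buf, size_t howmuch)
    {
        size_t have = evbuffer_get_length(buf);

        if (howmuch > have)
            howmuch = have;
        return evbuffer_pullup(buf, (ev_ssize_t)howmuch);
    }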
If the first chunk of a buffer is empty, and we're told to prepend to
the buffer, we should be willing to use the entire first chunk.
Instead, we were dependent on the value of chunk->misalign.
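A simplified sketch of the corrected condition, with a made-up chain struct
standing in for the real one:

    #include <stddef.h>

    struct chain {
        size_t buffer_len;              /* total capacity of the chain's buffer */
        size_t misalign;                /* unused space before the stored data */
        size_t off;                     /* bytes of data stored */
    };

    /* How many bytes can be prepended into this chain? */
    static size_t
    prepend_space(const struct chain *ch)
    {
        if (ch->off == 0)
            return ch->buffer_len;      /* empty: the whole chunk is usable */
        return ch->misalign;            /* otherwise only the gap at the front */
    }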
This constant decides the smallest (and typical) size of each evbuffer
chain. Since this number includes sizeof(evbuffer_chain) overhead,
the old value (256) was just too low: on 64-bit platforms, it would
spend nearly 20% of the allocations on overhead. The new values mean
that we'll be spending closer to 5% of evbuffer allocations on overhead.
It would be nice to get this number even lower if we can.
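To make the arithmetic concrete: assuming sizeof(struct evbuffer_chain) is
roughly 48 bytes on a 64-bit platform, a 256-byte minimum allocation spends
about 48/256 ≈ 19% of its space on the chain header, while a 1024-byte
minimum, for example, would cut that to about 48/1024 ≈ 5%.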
Previously, every call to min_heap_shift_down_() would invoke
min_heap_shift_up_() at the end. This used to be necessary in the
first version of the minheap code, since min_heap_erase() would call
min_heap_shift_down_() unconditionally. But when patch 8b7a3b36763
from Marko Kreen fixed min_heap_erase() to be more sensible, we left
the weird behavior of min_heap_shift_down_() in place.
Fortunately, "cui" noticed this and reported it on Niels's blog.
On 64-bit Windows, configure actually _finds_ select when it tests for
it, and due to the ordering of the I/O implementations in event.c it is
chosen over the win32select implementation.
This modification skips the test for select on win32 (we don't want
plain select there anyway, because Windows gets its own win32select
backend), causing my Windows box to get the win32select implementation.
(edited by Nick)