On Linux, we use only one fd to do main-thread signaling (since we have
eventfd()), so we don't need to close th_notify_fd[1] as we would if we were
using a socketpair.
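For context, a minimal sketch of the two setups, assuming the internal
th_notify_fd[] field and a HAVE_EVENTFD-style config macro (names are
approximate, not the literal code):

    #ifdef EVENT__HAVE_EVENTFD      /* assumed name for the config macro */
        /* One eventfd serves as both ends of the notification channel. */
        base->th_notify_fd[0] = eventfd(0, 0);
        base->th_notify_fd[1] = -1;  /* nothing extra to close on cleanup */
    #else
        /* A socketpair gives us two distinct fds, and th_notify_fd[1]
         * really does have to be closed when the base is freed. */
        evutil_socketpair(AF_UNIX, SOCK_STREAM, 0, base->th_notify_fd);
    #endif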
We were using the same bufferevent as the child of two filtering parents,
leaving another bufferevent orphaned. This made one get freed twice, and the
other not freed at all.
Possible fix for bug 2963306 spotted by Doug Cuthbertson.
Previously it would accept at most 2 iovecs, because our
previous_to_last nonsense didn't let it take any more. This forced us
to do extra reallocations in some cases where a small additional
malloc would have sufficed.
This actually makes some of the code a lot simpler: the only functions
that actually used previous_to_last for anything were the ones that
reserve and commit space.
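As a hedged usage sketch of the public reserve/commit API this unblocks
(the sizes and the appended string are invented):

    #include <event2/buffer.h>
    #include <string.h>

    /* Reserve ~4096 bytes spread across up to four chains instead of two,
     * write into the first chunk, then commit only what was used. */
    static int
    append_greeting(struct evbuffer *buf)
    {
        struct evbuffer_iovec v[4];
        const char msg[] = "hello";
        int n = evbuffer_reserve_space(buf, 4096, v, 4);
        if (n <= 0 || v[0].iov_len < sizeof(msg) - 1)
            return -1;
        memcpy(v[0].iov_base, msg, sizeof(msg) - 1);
        v[0].iov_len = sizeof(msg) - 1;   /* shrink to the bytes written */
        return evbuffer_commit_space(buf, v, 1);
    }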
This is the first patch in a series to replace previous_to_last with
last_with_data. Currently, we can only use two partially empty chains
at the end of an evbuffer, so if we have one with 511 bytes free, and
another with 512 bytes free, and we try to do a 1024-byte read, we
can't just stick another chain on the end: we need to reallocate the
last one. That's stupid and inefficient.
Instead, this patch adds a last_with_data pointer to eventually
replace previous_to_last. Rather than pointing to the penultimate
chain (if any) as previous_to_last does, last_with_data points to the
last chain that has any data in it, if any. If all chains are empty,
last_with_data points to the first chain. If there are no chains,
last_with_data is NULL.
The next step is to start using last_with_data everywhere that we
currently use previous_to_last. When that's done, we can remove
previous_to_last and the code that maintains it.
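A hedged sketch of the invariant (the off and next fields follow the
evbuffer_chain internals; this is an illustration, not the maintenance
code itself):

    /* Where last_with_data should point, per the rules above:
     *   - the last chain that holds any data, if one exists;
     *   - otherwise the first chain, if any chains exist;
     *   - otherwise NULL. */
    static struct evbuffer_chain *
    compute_last_with_data(struct evbuffer_chain *first)
    {
        struct evbuffer_chain *chain, *result = first;
        for (chain = first; chain; chain = chain->next) {
            if (chain->off != 0)    /* off == bytes of data in this chain */
                result = chain;
        }
        return result;              /* NULL when there are no chains */
    }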
Once, for reasons that made sense at the time, we had evdns.c use its
own logging subsystem with two levels, "warn" and "debug". This led
to problems: setting a log handler for Libevent wouldn't actually
trap these messages, the messages weren't on by default, and some of
the warnings should really have been plain messages.
This patch changes evdns.c so that, by default, it logs to
event_(debugx,warnx,msgx), and adds a new (internal-use-only)
log level, EVDNS_LOG_MSG. Programs that set an evdns logging
function will see no change. Programs that don't will now see evdns
warnings reported like other warnings.
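A hedged sketch of what the new default amounts to (not the literal
patch; the log helpers are the internal ones named above):

    /* Route evdns messages into Libevent's regular logging when no evdns
     * log function has been installed. */
    static void
    evdns_default_log(int severity, const char *msg)
    {
        switch (severity) {
        case EVDNS_LOG_DEBUG:
            event_debugx("%s", msg);   /* hidden unless debugging is on */
            break;
        case EVDNS_LOG_MSG:            /* the new internal-use-only level */
            event_msgx("%s", msg);
            break;
        default:                       /* EVDNS_LOG_WARN and anything else */
            event_warnx("%s", msg);
            break;
        }
    }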
Remember, win32 has a socket type that's actually a handle, so if
there's a chance that code is run on win32, we can't use "int" as the
socket type.
This isn't a blind search-and-replace: sometimes an fd is really for a
file, and not a socket at all.
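The portable spelling is evutil_socket_t from <event2/util.h>; a small
sketch (the function names here are just examples):

    #include <event2/util.h>
    #include <event2/event.h>

    /* evutil_socket_t is an int on Unix and a pointer-sized SOCKET handle
     * on win32, so callbacks and socket-holding structs must use it. */
    static void
    read_ready(evutil_socket_t fd, short events, void *arg)
    {
        (void)fd; (void)events; (void)arg;
    }

    static struct event *
    make_read_event(struct event_base *base, evutil_socket_t sock)
    {
        evutil_make_socket_nonblocking(sock);   /* also win32-safe */
        return event_new(base, sock, EV_READ | EV_PERSIST, read_ready, NULL);
    }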
This makes some cases of bench_http about 5% faster.
Our internal evbuffer_strpbrk() function was overly general (it tried
to handle all character sets when we only used it for "\r\n"), and
not very efficient (it called memchr once for each character in the
buffer until it found a \r or a \n). It actually showed up in some
profiles for HTTP testing, since evbuffer_readln() calls it when doing
loose CRLF detection. This patch replaces it with a faster
implementation.
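A hedged sketch of the specialized scan that replaces it; the real code
walks evbuffer chains, but a flat-memory version shows the idea:

    #include <stddef.h>

    /* Return the index of the first '\r' or '\n' in buf, or -1 if absent:
     * one pass over the data, no per-character memchr calls. */
    static long
    find_eol_char(const unsigned char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; ++i) {
            if (buf[i] == '\r' || buf[i] == '\n')
                return (long)i;
        }
        return -1;
    }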
These were introduced and deprecated in the same version (2.0.1-alpha),
presumably as part of a two-stage process. Everybody sane should be using
evsignal_assign() and evsignal_new() instead.
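For reference, a minimal usage sketch of the preferred names (the
handler and setup function are invented):

    #include <signal.h>
    #include <event2/event.h>

    static void
    on_sigint(evutil_socket_t sig, short events, void *arg)
    {
        (void)sig; (void)events;
        event_base_loopbreak((struct event_base *)arg);
    }

    static int
    setup_sigint(struct event_base *base)
    {
        struct event *ev = evsignal_new(base, SIGINT, on_sigint, base);
        if (!ev)
            return -1;
        return evsignal_add(ev, NULL);   /* no timeout on the signal event */
    }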
It looks like I accidentally removed most of WIN32-Code/event-config.h
when I was bumping the version. Fortunately, this happened when I
bumped to 2.0.4-alpha-dev rather than when I bumped to 2.0.4-alpha. :)
This patch restores the deleted parts of WIN32-Code/event-config.h
There should be no need to call be_socket_enable: that does an
event_add(). What we really want to do is event_active(), to make
sure that the writecb is executed.
Also, there was one "} if () {" that was missing an else.
I've noted that the return value for evutil_socket_connect() is
getting screwy, but since that isn't an exported function, we can fix
it whenever.
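A hedged one-liner to show the event_add()/event_active() distinction
above (ev_write is the bufferevent's internal write event, as exposed by
bufferevent_struct.h):

    #include <event2/event.h>
    #include <event2/bufferevent_struct.h>

    /* event_add() would only wait for the fd to become writable;
     * event_active() makes the write callback run on the next loop pass
     * regardless of the fd's state, which is what we want here. */
    static void
    kick_writecb(struct bufferevent *bufev)
    {
        event_active(&bufev->ev_write, EV_WRITE, 1);
    }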
The different bufferevent implementations had different behavior for
their timeouts. Some of them kept re-triggering the timeouts
indefinitely; some disabled the event immediately the first time a
timeout triggered. Some of them made the timeouts only count when
the bufferevent was actively trying to read or write; some did not.
The new behavior is modeled after old socket bufferevents, since
they were here first and their behavior is relatively sane.
Basically, each timeout disables the bufferevent's corresponding
read or write operation when it fires. Timeouts are stopped
whenever we suspend writing or reading, and reset whenever we
unsuspend writing or reading. Calling bufferevent_enable resets a
timeout, as does changing the timeout value.
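A hedged usage sketch of the unified behavior (the callback and helper
names are invented; the flag handling follows the description above):

    #include <event2/bufferevent.h>
    #include <event2/event.h>

    /* A read timeout now disables reading, so a caller that wants to keep
     * going must re-enable it -- which also restarts the timer. */
    static void
    event_cb(struct bufferevent *bev, short what, void *arg)
    {
        (void)arg;
        if ((what & BEV_EVENT_TIMEOUT) && (what & BEV_EVENT_READING))
            bufferevent_enable(bev, EV_READ);   /* resets the read timeout */
    }

    static void
    install_timeouts(struct bufferevent *bev)
    {
        struct timeval read_tv = { 30, 0 };     /* 30-second read timeout */
        bufferevent_setcb(bev, NULL, NULL, event_cb, NULL);
        bufferevent_set_timeouts(bev, &read_tv, NULL);
        bufferevent_enable(bev, EV_READ);
    }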
When we fixed persistent timeouts to make them reset themselves
based on the previous scheduled time rather than the current
time... we made them do so regardless of whether the event was
triggering because of a timeout or not!
This was of course bogus. When a _timeout_ triggers, we should
schedule the next timeout N seconds after the last _scheduled_
time... but when IO triggers, we should reset the timeout to N
seconds after now.
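To make that concrete with invented numbers: take an EV_READ|EV_PERSIST
event with a 10-second timeout last scheduled for t=20. If the timeout
fires at t=20 and the callback finishes at t=20.3, the next timeout
should be computed from the old schedule, at t=30; if instead the fd
becomes readable at t=23, the timeout should be reset from "now", to
t=33.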
I found these by adding an EVENT_BASE_ASSERT_LOCKED() call to most
of the functions in event.c that can only be called while holding
the lock.
event_reinit() never grabbed the lock, but it needed to.
event_persist_closure accessed the base to call event_add_internal()
and gettime() when its caller had already dropped the lock.
event_pending() called gettime() without grabbing the lock.
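A hedged sketch of the check described above (the macro is internal;
the helper name is just an example):

    /* Every event.c helper that must only run with the base lock held can
     * start with this assertion, so a missing lock shows up immediately in
     * debug builds instead of as a rare race. */
    static void
    example_locked_helper(struct event_base *base)
    {
        EVENT_BASE_ASSERT_LOCKED(base);
        /* ... touch base's internals ... */
    }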
This should end the family of bugs where we call bufferevent_free()
while a pending callback is holding a reference on the bufferevent,
and the callback tries to invoke the user callbacks before it releases
its own final reference.
This means that bufferevent_decref() is now a separate function from
bufferevent_free().
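A hedged sketch of the pattern this enables (bufferevent_incref/decref
are internal, and deliver_user_callbacks is a made-up stand-in for the
deferred-callback code):

    static void
    deferred_cb_wrapper(struct bufferevent *bev)
    {
        bufferevent_incref(bev);      /* the callback's own reference */
        deliver_user_callbacks(bev);  /* safe even if the user already
                                       * called bufferevent_free() */
        bufferevent_decref(bev);      /* may be the final release */
    }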
When we're doing a lookup in preparation for a connect, we might have
an unconnected socket on hand, and mustn't actually do any reading or
writing with it.