NOTE: This is not the official release until I tag it. If you see
this commit and assume that Libevent 2.0.8-rc is already final, you
may end up with something other than the official 2.0.8-rc.
While handshaking, we listen for both reads and writes from the
transport. But once we're connected, we start out with writes enabled
and reads disabled, which means we should not have the transport
reading on our behalf.
Previously, whenever writing was disabled on a bufferevent_filter (or
a filtering SSL bufferevent), we would stop writing on the underlying
bufferevent. This caused trouble, though: if you implemented a common
pattern like "stop writing once data X has been flushed", your
bufferevent filter would disable the underlying bufferevent after the
data was flushed to the underlying bufferevent, but before it had
actually been written to the network.
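As a concrete illustration, here is a minimal sketch of that pattern
(the callback name is hypothetical; the calls are the ordinary
Libevent 2.0 API):

    #include <event2/bufferevent.h>
    #include <event2/buffer.h>

    /* Write callback: fires once the filter's output buffer drains. */
    static void
    write_done_cb(struct bufferevent *filter_bev, void *ctx)
    {
        if (evbuffer_get_length(bufferevent_get_output(filter_bev)) == 0)
            /* Under the old behavior, this also stopped the underlying
             * bufferevent, stranding bytes the filter had already
             * pushed down but not yet written to the network. */
            bufferevent_disable(filter_bev, EV_WRITE);
    }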
Now, we have filters leave their underlying bufferevents enabled for
reading and writing immediately. They are not disabled unless the
user explicitly disables them, which is now allowed.
To handle the case where we want to choke reading on the underlying
bufferevent because the filter no longer wants to read, we use
bufferevent_suspend_read(). This is analogous to the way that we use
bufferevent_suspend_write() to suspend writing on a filtering
bufferevent when the underlying bufferevent's output buffer has hit
its high watermark.
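In other words, the internal logic is now roughly as follows (a
simplified sketch; bufferevent_suspend_read() and the
BEV_SUSPEND_FILT_READ flag live in bufferevent-internal.h):

    /* When the filter stops wanting data, choke the underlying
     * bufferevent's reads without actually disabling it... */
    bufferevent_suspend_read(underlying, BEV_SUSPEND_FILT_READ);

    /* ...and release it again once the filter wants more. */
    bufferevent_unsuspend_read(underlying, BEV_SUSPEND_FILT_READ);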
Our old code was too zealous about deleting the refill events that
let connections read or write again after they had run out of
bandwidth. Under some circumstances, this could leave a bufferevent
unable to ever refill one of its rate-limiting buckets.
Also, the code treated setting a per-connection rate-limit on a
connection that already had a group-limit as if it were changing the
limit on a connection whose allocation had already run out.
This patch fixes both of those problems.
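For context, the second case arises with code along these lines (a
sketch; "base" and "bev" are assumed to exist already, and the
numbers are arbitrary):

    /* A shared group limit... */
    struct ev_token_bucket_cfg *cfg =
        ev_token_bucket_cfg_new(4096, 8192, 4096, 8192, NULL);
    struct bufferevent_rate_limit_group *grp =
        bufferevent_rate_limit_group_new(base, cfg);
    bufferevent_add_to_rate_limit_group(bev, grp);

    /* ...plus a per-connection limit on the same bufferevent: this
     * is the combination the old code mishandled. */
    bufferevent_set_rate_limit(bev, cfg);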
We were looking at the number of bytes read on the wbio, not on the
rbio. But these are usually different BIOs, and the reading is
supposed to happen on the rbio.
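In OpenSSL terms, the fix boils down to asking the right BIO (a
sketch; the surrounding byte-counting logic is elided):

    /* Bytes consumed from the transport arrive through the read BIO,
     * so measure there... */
    unsigned long n = BIO_number_read(SSL_get_rbio(ssl));
    /* ...not BIO_number_read(SSL_get_wbio(ssl)), which counts bytes
     * read from the write-side BIO instead. */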
The writev() call is limited to at most IOV_MAX iovecs (or UIO_MAXIOV,
depending on whom you ask). This isn't a problem anywhere we've
tested except on OpenSolaris, where IOV_MAX was a mere 16.
This patch makes us go from "use up to 128 iovecs when writing" to
"use up to 128 iovecs when writing, or IOV_MAX/UIO_MAXIOV, whichever
is less". This is still wrong if you somehow find a platform that
defines IOV_MAX < UIO_MAXIOV, but I hereby claim that such a platform
is too stupid to worry about for now.
Found by Michael Herf.
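The resulting cap can be computed at compile time, roughly like this
(a sketch; the macro names here are illustrative):

    #define DEFAULT_WRITE_IOVEC 128
    #if defined(UIO_MAXIOV) && UIO_MAXIOV < DEFAULT_WRITE_IOVEC
    #define NUM_WRITE_IOVEC UIO_MAXIOV
    #elif defined(IOV_MAX) && IOV_MAX < DEFAULT_WRITE_IOVEC
    #define NUM_WRITE_IOVEC IOV_MAX
    #else
    #define NUM_WRITE_IOVEC DEFAULT_WRITE_IOVEC
    #endif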
We were trying to check whether any events had really been reported
on an fd before calling evmap_io_active on it, but instead we were
checking whether the event pointer was set, which was always true.
In practice, this patch shouldn't change much, since epoll_wait
shouldn't return an event unless there is actually an event going
on.
Spotted by an anonymous bug reporter on Sourceforge. Closes bug
3078425.
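The corrected dispatch loop looks roughly like this (a sketch of
epoll.c's inner loop; variable names are illustrative):

    int what = events[i].events;
    short ev = 0;

    if (what & (EPOLLHUP | EPOLLERR)) {
        ev = EV_READ | EV_WRITE;
    } else {
        if (what & EPOLLIN)
            ev |= EV_READ;
        if (what & EPOLLOUT)
            ev |= EV_WRITE;
    }

    if (!ev)        /* The old test checked a pointer that was */
        continue;   /* always non-NULL, so it never skipped. */

    evmap_io_active(base, events[i].data.fd, ev);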
Remember, the code

    int is_less_than(int a, unsigned b) {
        return a < b;
    }

is buggy, since the C integer promotion rules basically turn it into

    int is_less_than(int a, unsigned b) {
        return ((unsigned)a) < b;
    }

and we really want something closer to

    int is_less_than(int a, unsigned b) {
        return a < 0 || ((unsigned)a) < b;
    }

Suggested by an example from Ralph Castain.
Jason Toffaletti discovered with helgrind that our signal handler was
messing with evsig_base, which can be set from lots of places in the
code. Ordinarily, we'd just stick a lock on it, except that it is not
async-signal-safe (and genuinely error-prone) to call
pthread_mutex_lock() from inside a signal handler.
The solution is to store only the fd we write to in a static
variable, have the signal handler write the signal number to that fd,
and put evsig_cb in charge of activating signal events.
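A simplified sketch of the new arrangement (error handling and errno
saving elided; the names here are illustrative, not guaranteed to
match the real code):

    static evutil_socket_t evsig_fd = -1;   /* the handler's only state */

    static void
    evsig_handler(int sig)
    {
        ev_uint8_t msg = (ev_uint8_t)sig;
        /* send() is async-signal-safe; just queue the signal number. */
        send(evsig_fd, (char *)&msg, 1, 0);
    }

    /* evsig_cb, running inside the event loop, reads the numbers back
     * from the other end of the socketpair and activates the matching
     * signal events. */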
I have no idea how we'll cope if we want to enable this to handle
siginfo (where available) in the future.
When using the signal.c signal backend, Libevent currently allows
only one event_base at a time to actually receive signals. (This has
been the behavior since at least 1.4, and probably much earlier.) Now
we detect and warn if you're likely to be racing over which signal
gets delivered to which thread.
We also add a lock to control modifications of the evsig_base field,
to avoid race conditions like those found by Jason Toffaletti.
Also, more comments. Comments are good.