The trick here is that if we already told the base to wake up, and it
hasn't woken up yet, we don't need to tell it to wake up again. This
should help a lot with inherently multithreaded backends like IOCP.
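A minimal sketch of the idea, with made-up names (illustrative only, not
the actual internals):

    #include <pthread.h>
    #include <unistd.h>

    /* Illustrative only: skip the wakeup write if one is already pending. */
    struct base_like {
        pthread_mutex_t lock;
        int notify_pending;   /* nonzero if a wakeup byte is already in flight */
        int notify_fd;        /* write end of the wakeup pipe */
    };

    static void notify_base(struct base_like *b)
    {
        pthread_mutex_lock(&b->lock);
        if (!b->notify_pending) {
            b->notify_pending = 1;
            (void) write(b->notify_fd, "", 1);   /* wake the loop exactly once */
        }
        pthread_mutex_unlock(&b->lock);
        /* The loop clears notify_pending when it drains the pipe. */
    }
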
- Increment reference count of bufferevents before initiating overlapped
operations to prevent the destructor from being called while operations
are pending. The only portable way of canceling overlapped ops is to
close the socket.
- Translate error codes to WSA* codes.
- Better handling of errors.
- Add an interface to add and del "virtual" events. Because IOCP
bufferevents don't register any events with the base, the event loop
has no way of knowing they exist. This causes the loop to terminate
prematurely. event_base_{add,del}_virtual increment/decrement the base's
event count so that the loop keeps running while there are any enabled
IOCP bufferevents.
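A rough sketch of the intended usage pattern; start_overlapped_read() is a
hypothetical helper, and the other prototypes below paraphrase the
interfaces described above rather than quoting exact signatures:

    struct event_base;
    struct bufferevent;

    void event_base_add_virtual(struct event_base *base);
    void event_base_del_virtual(struct event_base *base);
    void bufferevent_incref(struct bufferevent *bev);
    void bufferevent_decref(struct bufferevent *bev);
    int  start_overlapped_read(struct bufferevent *bev);   /* hypothetical */

    static void begin_read(struct event_base *base, struct bufferevent *bev)
    {
        bufferevent_incref(bev);        /* keep bev alive while the op is pending */
        event_base_add_virtual(base);   /* keep the loop running for this op */
        if (start_overlapped_read(bev) < 0) {
            event_base_del_virtual(base);
            bufferevent_decref(bev);
        }
        /* On success, the completion callback makes the matching
           del_virtual/decref calls when the overlapped read finishes. */
    }
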
Avi Bab correctly noted in bug 3044479 that any thread blocking on
current_event_lock does so while holding th_base_lock, which makes it
impossible for the currently running event's callback to call any other
function that requires th_base_lock.
This patch switches the current_event_lock code to instead use a
condition variable that we wait on if we're trying to mess with
a currently-executing event, and that we signal when we're done
executing a callback if anybody is waiting on it.
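Schematically the new arrangement looks something like this (illustrative
pthreads code; the names approximate the real fields but the details
differ):

    #include <pthread.h>

    struct base_like {
        pthread_mutex_t th_base_lock;
        pthread_cond_t  current_event_cond;
        void *current_event;          /* event whose callback is running now */
        int   current_event_waiters;  /* threads waiting for it to finish */
    };

    /* Called, with th_base_lock held, by a thread that wants to modify ev. */
    static void wait_if_running(struct base_like *b, void *ev)
    {
        while (b->current_event == ev) {
            b->current_event_waiters++;
            pthread_cond_wait(&b->current_event_cond, &b->th_base_lock);
            b->current_event_waiters--;
        }
    }

    /* Called by the loop after a callback returns, with th_base_lock held. */
    static void callback_done(struct base_like *b)
    {
        b->current_event = NULL;
        if (b->current_event_waiters)
            pthread_cond_broadcast(&b->current_event_cond);
    }
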
Though the C standards allow it, it's apparently possible to get MSVC
upset by saying "struct { int field; } (declarator);" instead of
"struct {int field; } declarator;", so let's just not do that.
Bugfix for 3044492
(commit msg by nickm)
Of the backends that support edge-triggered IO, most (all?) do not
support attempts to mix edge-triggered and level-triggered IO on the
same FD. With debugging mode enabled, we now detect and refuse attempts
to add a level-triggered IO event to an fd that already has an
edge-triggered IO event, and vice versa.
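For instance, with debug mode enabled, a mix like the following on a single
fd is now detected when the second event is added (sketch; assumes a
backend that supports EV_ET, error handling omitted):

    #include <event2/event.h>

    static void cb(evutil_socket_t fd, short what, void *arg)
    { (void)fd; (void)what; (void)arg; }

    static void demo(evutil_socket_t fd)
    {
        event_enable_debug_mode();      /* must run before any base is created */
        struct event_base *base = event_base_new();

        struct event *lt = event_new(base, fd, EV_READ | EV_PERSIST, cb, NULL);
        struct event *et = event_new(base, fd, EV_READ | EV_ET | EV_PERSIST, cb, NULL);

        event_add(lt, NULL);
        event_add(et, NULL);   /* refused: fd already has a level-triggered event */
    }
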
Calling event_base_loop on a base from inside a callback invoked by
that same base, or from two threads at once, has long been a way to
get exceedingly hard-to-diagnose errors. This patch adds code to
detect such reentrant invocations and exit quickly with a warning
that should explain what went wrong.
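For example, the following now fails quickly with a warning instead of
producing hard-to-diagnose misbehavior (sketch):

    #include <event2/event.h>

    /* Re-entering the loop from a callback running on the same base:
       event_base_dispatch() now notices and bails out with a warning. */
    static void cb(evutil_socket_t fd, short what, void *arg)
    {
        struct event_base *base = arg;
        (void)fd; (void)what;
        event_base_dispatch(base);   /* reentrant invocation: detected */
    }
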
Our hash-table implementation stored a copy of the hash code in each
element. But as we were using it, all of our hash codes were
ridiculously easy to calculate: most of them were just a matter of a
load and a shift.
This patch lets ht-internal be built in either of two ways: one caches
the hash code for each element, and one recalculates it each time it's
needed.
This patch also chooses a slightly better hash code for
event_debug_entry.
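Conceptually, the two build modes trade space for recomputation like this
(illustrative structures, not the actual ht-internal macros):

    #include <stdint.h>

    /* Mode 1: cache the hash code in each element (costs one word per entry). */
    struct entry_cached {
        struct entry_cached *next;
        unsigned hash;                /* computed once, at insertion */
        void *key;
    };

    /* Mode 2: recompute the hash whenever it is needed; cheap when hashing
       is just a load and a shift, as it is for event_debug_entry. */
    struct entry_uncached {
        struct entry_uncached *next;
        void *key;
    };
    #define HASH_OF(e) ((unsigned)(((uintptr_t)(e)->key) >> 5))
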
Right now it only catches cases where we aren't initializing events,
or where we are re-initializing events without deleting them first.
These are, however, shockingly common.
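Both bugs look roughly like this (a made-up sketch of the patterns the
checks flag):

    #include <string.h>
    #include <event2/event.h>
    #include <event2/event_struct.h>

    static void cb(evutil_socket_t fd, short what, void *arg)
    { (void)fd; (void)what; (void)arg; }

    static void mistakes(struct event_base *base, evutil_socket_t fd)
    {
        struct event ev;

        /* 1: adding an event that was never initialized. */
        memset(&ev, 0, sizeof(ev));
        event_add(&ev, NULL);                             /* flagged in debug mode */

        /* 2: re-initializing an event without deleting it first. */
        event_assign(&ev, base, fd, EV_READ, cb, NULL);
        event_add(&ev, NULL);
        event_assign(&ev, base, fd, EV_WRITE, cb, NULL);  /* no event_del(): flagged */
    }
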
This is necessary or useful for a few reasons:
1) Sometimes applications will add and delete the same event more
than once between calls to dispatch. Processing these changes
immediately is needless, and potentially expensive (especially
if we're on a system that makes one syscall per changed event).
Yes, this actually happens in practice for nonpathological
code, such as when the user's callback conditionally re-adds a
non-persistent event, or when draining a buffer turns off
writing and invokes a user callback that adds more data, which
in turn re-enables writing. (See the sketch after this list.)
2) Sometimes we can coalesce multiple changes on the same fd into
a single syscall if we know about them in advance. For
example, epoll can do an add and a delete at the same time, but
only if we have found out about both of them before we tell
epoll.
3) Sometimes adding an event that we immediately delete can cause
unintended consequences: in kqueue, this makes pending events
get reported spuriously.
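As a concrete instance of point 1, a callback like this can delete and
re-add the same event on every pass through the loop; with the changes
batched, the back-to-back delete and re-add need not each reach the
kernel:

    #include <unistd.h>
    #include <event2/event.h>

    /* Non-persistent read event whose callback conditionally re-adds it. */
    static void read_cb(evutil_socket_t fd, short what, void *arg)
    {
        struct event *self = arg;
        char buf[4096];
        (void)what;

        if (read(fd, buf, sizeof(buf)) > 0)
            event_add(self, NULL);   /* expect more data: re-add the same event */
    }
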
Previously, event_base.activequeues was of type "array of pointers to
eventlist." This was pointless: none of the eventlists were allowed
to be NULL. Worse, it was inefficient (see the sketch after this list):
- It made looking up an active event queue take two pointer
dereferences instead of one, thus risking extra cache misses.
- It used more RAM than it needed to, because of the extra pointer
and the malloc overhead.
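Roughly, the layout change is:

    /* before: array of pointers, each eventlist malloc'd separately */
    struct event_list **activequeues;   /* activequeues[i] -> heap-allocated list head */

    /* after: one flat array of list heads, one dereference to reach a queue */
    struct event_list *activequeues;    /* activequeues[i] is the list head itself */
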
Also, this patch fixes a bug where we were saying
calloc(N,N*sizeof(X)) instead of calloc(N,sizeof(X)) when allocating
activequeues. That part, I'll backport.
Also, we warn and return -1 on failure to allocate activequeues,
rather than calling event_err.
svn:r1525
Libevent's current timeout code is relatively optimized for the
randomly scattered timeout case, where events are added with their
timeouts in no particular order. We add and remove timeouts with
O(lg n) behavior.
Frequently, however, an application will want to have many timeouts
of the same value. For example, we might have 1000 bufferevents,
each with a 2 second timeout on reading or writing. If we knew this
were always the case, we could just put timeouts in a queue and get
O(1) add and remove behavior. Of course, a queue would give O(n)
performance for a scattered timeout pattern, so we don't want to
just switch the implementation.
This patch gives the user the ability to explicitly tag certain
timeout values as being "very common". These timeout values have a
cookie encoded in the high bits of their tv_usec field to indicate
which queue they belong on. The queues themselves are each
triggered by an entry in the minheap.
See the regress_main.c code for an example use.
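A minimal sketch of that usage, built on the public
event_base_init_common_timeout() call (error handling omitted):

    #include <event2/event.h>

    static void timeout_cb(evutil_socket_t fd, short what, void *arg)
    { (void)fd; (void)what; (void)arg; }

    static void setup(struct event_base *base)
    {
        struct timeval tv = { 2, 0 };

        /* Ask the base for the "magic" copy of this timeout; its tv_usec
           carries the cookie that routes events onto the common-timeout queue. */
        const struct timeval *common = event_base_init_common_timeout(base, &tv);

        struct event *ev = event_new(base, -1, 0, timeout_cb, NULL);
        event_add(ev, common);   /* O(1) add/remove for this timeout value */
    }
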
svn:r1517
This gives you the property that once you have called event_del(E),
you know that E is no longer running or pending or active at all, and
so it is safe to delete the resource used by E's callback.
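For example, a teardown path like this is now safe even if E's callback
might be running in another thread at the moment of the call (sketch):

    #include <stdlib.h>
    #include <event2/event.h>

    static void teardown(struct event *e, void *cb_state)
    {
        event_del(e);     /* waits, if needed, for e's callback to finish elsewhere */
        free(cb_state);   /* safe: the callback can no longer touch this */
        event_free(e);
    }
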
svn:r1341
Either I need to make the callbacks get deferred in a base with no events (doable), or I need to make it okay to call launch_read from inside the callback for read (tricky).
svn:r1277
A 'deferred callback' is just a function that we've queued in the
event base. This ability is needed for some multithreading work, and for complex
callback chains. For internal use only.
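Conceptually, a deferred callback is nothing more than a queued
(function, argument) pair, roughly like this (illustrative only; the real
internal interface differs in detail):

    #include <sys/queue.h>

    /* Illustrative: the loop pops these off a per-base queue and runs them,
       instead of the scheduling code calling them directly. */
    struct deferred_cb_like {
        TAILQ_ENTRY(deferred_cb_like) next;
        void (*cb)(void *arg);
        void *arg;
        int queued;    /* guards against scheduling the same entry twice */
    };
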
svn:r1150
a) this is 2009
b) niels and nick have been co-maintainers for a while
c) saying "all rights reserved" when you then go on to explicitly
disclaim some rights is sheer cargo-cultism.
svn:r1065