The epoll interface ordinarily gives us one-millisecond
precision, so on Linux it makes perfect sense to use the
CLOCK_MONOTONIC_COARSE timer. But when the user has set the new
PRECISE_TIMER flag for an event_base (either by the
EVENT_BASE_FLAG_PRECISE_TIMER flag, or by the EVENT_PRECISE_TIMER
environment variable), they presumably want finer granularity.
On not-too-old Linuxes, we can achieve this using the timerfd
mechanism, which accepts nanosecond granularity and understands
POSIX clocks. It's a little more expensive than just calling
epoll_wait(), so we won't do it by default.
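For illustration, here is a minimal standalone sketch (not libevent's
actual code) of combining a timerfd with epoll to get a
sub-millisecond timeout; the 250-microsecond value is just an example:

    #include <sys/timerfd.h>
    #include <sys/epoll.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        /* A timerfd delivers expirations through a file descriptor,
         * so it can sit in the same epoll set as everything else. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
        if (tfd < 0) { perror("timerfd_create"); return 1; }

        /* Arm it for 250 microseconds from now: finer than the 1 ms
         * granularity of epoll_wait()'s timeout argument. */
        struct itimerspec its = { .it_value.tv_nsec = 250 * 1000 };
        if (timerfd_settime(tfd, 0, &its, NULL) < 0) {
            perror("timerfd_settime"); return 1;
        }

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = tfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);

        /* Block with no epoll timeout; the timerfd wakes us up. */
        struct epoll_event out;
        if (epoll_wait(epfd, &out, 1, -1) == 1) {
            uint64_t nexp;
            read(tfd, &nexp, sizeof(nexp));
            printf("timer expired %llu time(s)\n",
                   (unsigned long long)nexp);
        }
        close(tfd);
        close(epfd);
        return 0;
    }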
This test uses a C program to generate its output, and then uses a
Python program to check it for correctness. On systems without
Python, we just make sure that the C program doesn't crash.
We should probably require some particular Python version; this is
an alpha, though, so I'm sure somebody will tell us which.
This test opens a server socket, and forks a child which connects to that
server socket many times. It sets a low number for the max open file limit
to catch any file descriptor leaks.
It will not work on Windows, since it uses fork() to create both the
server and the clients.
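In case it helps to picture the shape of such a test, here is a
hedged, self-contained sketch along the same lines (hypothetical
code, not the actual regression test):

    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N_CONNECTIONS 256

    int main(void)
    {
        /* Keep the fd limit low so that any leaked descriptor
         * exhausts it long before N_CONNECTIONS is reached. */
        struct rlimit lim = { 32, 32 };
        if (setrlimit(RLIMIT_NOFILE, &lim) < 0) {
            perror("setrlimit"); return 1;
        }

        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        sin.sin_port = 0; /* let the kernel pick a port */
        bind(listener, (struct sockaddr *)&sin, sizeof(sin));
        socklen_t len = sizeof(sin);
        getsockname(listener, (struct sockaddr *)&sin, &len);
        listen(listener, 16);

        pid_t pid = fork(); /* this is what rules out Windows */
        if (pid == 0) {
            /* Child: connect to the server many times. */
            close(listener);
            for (int i = 0; i < N_CONNECTIONS; ++i) {
                int fd = socket(AF_INET, SOCK_STREAM, 0);
                if (fd < 0 ||
                    connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                    exit(1);
                close(fd);
            }
            exit(0);
        }

        /* Parent: accept and close every connection; leaking even
         * one descriptor would show up as an accept() failure. */
        for (int i = 0; i < N_CONNECTIONS; ++i) {
            int fd = accept(listener, NULL, NULL);
            if (fd < 0) { perror("accept (fd leak?)"); return 1; }
            close(fd);
        }
        int status;
        waitpid(pid, &status, 0);
        return !(WIFEXITED(status) && WEXITSTATUS(status) == 0);
    }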
The main reason to disable installation is when you're building
libevent as a subpackage for embedding: you want your main
package's "make all" to build libevent, but you don't want your main
package's "make install" to install libevent.
The only changes needed were to handle the fact that the method name
"epoll (with changelist)" matches the environment variable
EVENT_NOEPOLL rather than the imaginary "EVENT_NOEPOLL (WITH CHANGELIST)".
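A rough sketch of the naming convention involved (a hypothetical
helper, not libevent's exact code): the method name is uppercased and
cut at the first non-alphanumeric character before "EVENT_NO" is
prepended:

    #include <ctype.h>
    #include <stdio.h>

    /* Hypothetical helper: build the EVENT_NO* variable name that
     * disables a given backend, so "epoll (with changelist)" yields
     * "EVENT_NOEPOLL" rather than "EVENT_NOEPOLL (WITH CHANGELIST)". */
    static void disable_var_for_method(const char *name,
                                       char *out, size_t outlen)
    {
        const char *prefix = "EVENT_NO";
        size_t i = 0;
        while (*prefix && i + 1 < outlen)
            out[i++] = *prefix++;
        for (; *name && i + 1 < outlen; ++name) {
            if (!isalnum((unsigned char)*name))
                break; /* stop at the ' ' in "epoll (with changelist)" */
            out[i++] = (char)toupper((unsigned char)*name);
        }
        out[i] = '\0';
    }

    int main(void)
    {
        char var[64];
        disable_var_for_method("epoll (with changelist)", var, sizeof(var));
        puts(var); /* prints EVENT_NOEPOLL */
        return 0;
    }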
Eventually test-changelist should expand to try more cases, maybe
query the status of the actual changelist somehow, and integrate it
with the rest of the unit tests.
Also, add test-changelist to .gitignore.
The default behavior of test.sh was to suppress all output from
test/regress and say nothing but OKAY or FAILED. This wasn't so good
for getting bugs reported, since lots of people didn't know to set
TEST_OUTPUT_FILE or to re-run ./test/regress on its own.
Now, when you don't specify an output file for test.sh, it runs
regress with the --quiet option. This option makes the unit tests
print output only on failure, which is probably what we wanted.
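As a rough illustration of that behavior (a hypothetical harness, not
the actual regress code), you capture each test's output and replay
it only when the test fails:

    #include <stdio.h>

    /* Hypothetical harness: run one test with its log captured in a
     * temporary file; dump the log only if the test failed, so
     * passing tests stay silent. */
    static int run_quietly(int (*test_fn)(FILE *log), const char *name)
    {
        FILE *log = tmpfile();
        if (!log)
            return -1;
        int failed = test_fn(log);
        if (failed) {
            char buf[512];
            size_t n;
            printf("FAILED: %s\n", name);
            rewind(log);
            while ((n = fread(buf, 1, sizeof(buf), log)) > 0)
                fwrite(buf, 1, n, stdout);
        }
        fclose(log);
        return failed;
    }

    /* Example test that writes a diagnostic and fails. */
    static int always_fails(FILE *log)
    {
        fprintf(log, "expected 42, got 7\n");
        return 1;
    }

    int main(void)
    {
        return run_quietly(always_fails, "always_fails");
    }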
It turns out that in all conformant shells, "unset FOO" removes FOO
both from the shell's variables and from the exported environment.
(I've tested this on msys, opensolaris, linux, osx, and freebsd.)
And in nearly every shell I can find, "unset FOO; export FOO" does
the same as "unset FOO"... except in my FreeBSD VM, where the "export
FOO" sets the exported value of FOO equal to "". This broke test.sh
for us.
The fix is simple: remove the needless exports!
This required:
- Adding another WIN32 section in test.sh
- Not running "touch /dev/null"
- Calling WSAStartup in all the test binaries (see the sketch below)
- Fixing a dumb Windows-only bug in test-time.c
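The WSAStartup part is standard Winsock boilerplate, roughly like the
following sketch (the wrapper name is mine):

    #ifdef _WIN32
    #include <winsock2.h>
    #endif
    #include <stdlib.h>

    /* Each test binary must initialize Winsock before making any
     * socket call on Windows; on other platforms this is a no-op. */
    static void init_winsock(void)
    {
    #ifdef _WIN32
        WSADATA wsadata;
        if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
            exit(1); /* no usable Winsock; the tests can't run */
    #endif
    }

    int main(void)
    {
        init_winsock();
        /* ... run the actual tests here ... */
        return 0;
    }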
Some systems have a version of /bin/sh whose builtin echo doesn't
support the -n option used in test/test.sh. /bin/echo, however,
usually does. This patch makes us use /bin/echo for echo -n whenever
it is present.
Also, our use of echo -n really only made sense when suppressing all
test output. Since test output isn't suppressed when logging to a
file, this patch makes us stop using echo -n when logging to a file.
By default, the test.sh script still suppresses the output of all the
tests it invokes. Now, however, you can have that output written to
a file specified in the TEST_OUTPUT_FILE shell variable.
Remove the rtsig method, as discussed in July. It hasn't compiled for quite a while, and nobody seems to have missed it much. Please let us know if this was a bad call. [Tracker issue 1826539].
svn:r485