In lugnet.people, Todd Lehman writes:
> I'm kinda wary of that because it is so trivial for a potential attacker's
> program to fork multiple copies of itself and work right around the delay as
> if it wasn't even there. If you delay 3 seconds, then a cracker program just
> forks extra copies of itself and works in parallel. So sleep-delays over
> HTTP don't count for much. On the other hand, a server could probably get
> around that by making a password mutex for each IP address, where, upon
> failure, the process that owns the mutex would delay some number of seconds
> before releasing the mutex to the next process. That way, no HTTP process
> checking a pw could step around any other.
I've been thinking about this more tonight, and reading a bit about SysV
semaphores, but I don't have experience with them and I'm finding the docs
confusing, especially where Perl is concerned. Anyway, upon further
reflection, I wonder if semaphores (or mutexes) are even necessary here.
What about this instead: Upon pw mismatch (failure), increment a counter in
a data table, indexed by the IP address of the HTTP client. Also store the
timestamp of the most recent failure. Now, if the counter is greater than,
say, 5, then put the current process to sleep for some short amount of time,
blocking the socket to the client. And then periodically, say, once per hour,
on a cron job, remove stale items (IP addresses that haven't produced a failed
login attempt in the past hour or so).
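Roughly, in Perl -- just a minimal sketch, where an in-memory hash stands in
for the real DB table and the names (record_failure, delay_for) are nothing
but placeholders:

    use strict;
    use warnings;

    my %failed;  # IP address => { count => n, last => epoch seconds }

    # Called on each password mismatch from $ip.
    sub record_failure {
        my ($ip) = @_;
        my $rec = $failed{$ip} ||= { count => 0, last => 0 };
        $rec->{count}++;
        $rec->{last} = time;
        sleep delay_for($rec->{count}) if $rec->{count} > 5;
    }

    sub delay_for { my ($n) = @_; return $n }  # t(n) = n; more on this below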
The objective is to limit the overall throughput of brute-force or dictionary
cracking attempts, so it wouldn't be necessary to delay upon success. In
fact, delaying upon success (after failure) would make it possible for a
cracker on a shared HTTP proxy server to DoS other innocent people making
legitimate requests from the same shared IP address. So not delaying upon
success, even after failure, prevents DoS on shared proxy servers. :-)
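Concretely, only the failure path ever touches the delay logic. Building on
the record_failure() sketch above, and assuming some hypothetical check_pw()
that does the actual comparison:

    # Success returns immediately -- no sleep -- even if this IP address
    # has recent failures on record; only the failure path ever delays.
    sub attempt_login {
        my ($ip, $userid, $pw) = @_;
        return 1 if check_pw($userid, $pw);   # never delay on success
        record_failure($ip);
        return 0;
    }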
To make things more interesting and progressively harder for attackers,
compute the delay time in seconds t(n) as a function of the number n of
recently occurring failed attempts from a given IP address. Some possible
definitions for t(n):
t(n) = n
t(n) = max(0, n-5)
t(n) = 1 + log_2(n)
t(n) = int(sqrt(sqrt(n)))
etc.
I'd probably go with something like
t(n) = min(10, 1 + log_2(n))
but fundamentally, you just want dt/dn to be non-negative.
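In Perl that choice might look like this (Perl's log() is the natural log,
hence the change of base):

    # t(n) = min(10, 1 + log_2(n)): grows with n but is capped at 10
    # seconds, so even a long-suspect IP address never waits very long.
    sub delay_for {
        my ($n) = @_;
        my $t = 1 + log($n) / log(2);
        return $t < 10 ? int($t) : 10;
    }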
So, this sort of a deterrent would be super-easy to code, and I think it would
be effective without getting in the way of anything legitimate. Additionally, it has a
kind of "short-term memory" about IP addresses it doesn't trust. All it takes
is a db table, a job to remove stale counts periodically, and a trivial
function to compute the delay.
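The stale-count sweep is just as small -- again with the hash standing in for
the real table, and 3600 seconds matching the "past hour or so" above:

    # Run from cron, e.g. hourly: drop entries with no failure in the
    # past $max_age seconds, so a quiet IP regains a clean slate.
    sub sweep_stale {
        my ($table, $max_age) = @_;
        my $now = time;
        for my $key (keys %$table) {
            delete $table->{$key} if $now - $table->{$key}{last} > $max_age;
        }
    }

    # e.g.  sweep_stale(\%failed, 3600);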
Has anyone heard of this sort of deterrent before? If so, what is it called?
And are there any insidious pitfalls? The most obvious weakness is that a
client could fork a zillion times and make a zillion simultaneous connections
and cause overall DoS as processes build up and overflow swapspace, but heck,
that risk always exists on a webserver regardless of what any process of
sufficient complexity does for a living, unless the webserver can dynamically
block IP addresses at the low TCP/IP level somehow. Then the attacker would
have to do some other kind of flooding. Anyway...
It might even be sufficient never to bother with periodically cleaning out
the failed-count accumulators -- just let them climb indefinitely -- because
it only delays upon failure, never upon success. The worst that could happen
to an innocent person then is someone logging in interactively via a shared
proxy server and getting a mysteriously long delay if they accidentally typed
their password wrong. It might not hurt to index the DB table on the
(IP address, userid) pair -- something that would be problematic without
progressive delays, but probably safe with increasing delays like this.
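That variant is a one-line change to the earlier record_failure() sketch --
the '|' delimiter is arbitrary, just something that can't appear in a userid:

    # Track failures per (IP, userid) pair instead of per IP alone.
    sub record_failure {
        my ($ip, $userid) = @_;
        my $key = join '|', $ip, $userid;
        my $rec = $failed{$key} ||= { count => 0, last => 0 };
        $rec->{count}++;
        $rec->{last} = time;
        sleep delay_for($rec->{count}) if $rec->{count} > 5;
    }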
--Todd