state management/nginx machinery
- all requests to digest-protected pages are funneled through a single mutex guarding an
  rbtree of recently issued nonce values. the logic for evicting stale nonces from the tree
  over time is fairly simple, but what's unclear to me is how to schedule these garbage
  collection runs.

  the cleanup hook on the request's pool seems like a good candidate, since it won't block
  the request that triggered it. perhaps with an atomic flag in the shm segment to prevent
  traffic jams of overlapping cleanups?
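  roughly what i have in mind; all names here are illustrative rather than the module's
  actual ones:

      /* register the sweep while handling the request; the hook fires when the
         request pool is destroyed, i.e. after the response has gone out */
      ngx_pool_cleanup_t  *cln;

      cln = ngx_pool_cleanup_add(r->pool, 0);
      if (cln == NULL) {
          return NGX_ERROR;
      }
      cln->handler = ngx_http_auth_digest_gc;      /* hypothetical sweep function */
      cln->data = shm_zone->data;

      /* the sweep itself: the atomic flag lets all but one caller bail early */
      static void
      ngx_http_auth_digest_gc(void *data)
      {
          ngx_http_auth_digest_shm_t  *shm = data; /* hypothetical shm header struct */

          if (!ngx_atomic_cmp_set(&shm->gc_lock, 0, 1)) {
              return;                              /* another worker is sweeping */
          }

          /* ... lock the mutex, walk the rbtree, free expired nonce nodes ... */

          (void) ngx_atomic_cmp_set(&shm->gc_lock, 1, 0);
      }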

- a larger unknown (to me) is how tree maintenance performs at the tree sizes (also
  unknown) likely to be seen in production. it would be good to have some solid numbers
  on which to base the shm-size and eviction defaults.

- there's a fair amount of painful parsing code devoted to unpacking the key/value fields
  in the Authorization header. i have to believe i'm just unaware of an nginx built-in of
  some sort that will do this part for me, but the docs only led me to a string-level
  representation of the header.
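  failing that, the hand-rolled scanner boils down to something like this (sketched
  outside nginx's string types for brevity; backslash escapes are not handled):

      #include <stdio.h>

      /* walk `key=token, key2="quoted value", ...` and report each pair */
      static void
      scan_auth_fields(const char *p)
      {
          while (*p) {
              const char  *key, *val;
              size_t       klen, vlen;

              while (*p == ' ' || *p == ',') p++;   /* skip separators */
              if (*p == '\0') break;

              key = p;                              /* key runs up to '=' */
              while (*p && *p != '=') p++;
              klen = p - key;
              if (*p == '=') p++;

              if (*p == '"') {                      /* quoted values may hold commas */
                  val = ++p;
                  while (*p && *p != '"') p++;
                  vlen = p - val;
                  if (*p == '"') p++;
              } else {                              /* bare token */
                  val = p;
                  while (*p && *p != ',') p++;
                  vlen = p - val;
              }

              printf("%.*s => %.*s\n", (int) klen, key, (int) vlen, val);
          }
      }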

- there should be a directive letting you specify that only particular users in a realm may
  log in. how to handle wildcards, though: maybe "*" or "any"? "_" or "none"?
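  e.g. something like this, where the new directive's name and the wildcard token are
  both hypothetical:

      location /private/ {
          auth_digest           "restricted";
          auth_digest_user_file /etc/nginx/htdigest;
          auth_digest_users     alice bob;    # or: auth_digest_users any;
      }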


rfc 2617
- currently lacks backward compatibility with clients that don't provide `qop' fields in
  the Authorization header. according to the rfc the server should work without it, but is
  it worth supporting a less secure version of an already not-bulletproof authentication
  scheme?
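  for reference, the two response computations in rfc 2617, where
  HA1 = MD5(username:realm:password) and HA2 = MD5(method:uri):

      with qop:     response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
      without qop:  response = MD5(HA1:nonce:HA2)       (the old rfc 2069 form)

  so supporting qop-less clients means a second path through the hash check, not just a
  more forgiving parser.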

- should the 401 response also offer a basic-auth challenge if that module is enabled for
  the same location block? is there a way for one module to read another's config to detect
  the overlap? or is this a module-loading-order issue (cf. the way the fancy_index module
  inserts itself before the built-in autoindex module in its HTTP_MODULES config var)?
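  rfc 2617 does permit multiple challenges in a single response, so the goal would be a
  401 carrying both headers:

      WWW-Authenticate: Digest realm="restricted", qop="auth", nonce="...", opaque="..."
      WWW-Authenticate: Basic realm="restricted"

  the open question is just which module gets to emit the second one.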

- the opaque field is not used when generating challenges, nor is it validated when included
  in an authentication request. is this a significant omission? the spec makes it seem as
  though it only exists as a convenience to stash state in, but i could believe some software
  out there depends upon it...

general (in)security
- i followed the model of the auth_basic module, which pages through the password file's
  contents on every request. i know from experience that it's impossible for me to write
  that sliding-window-through-a-buffer routine without creating at least a couple of
  off-by-one errors and non-terminated strings. it would be nice to find them.
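  for reviewers, the shape of the routine is roughly this (plain read(2) instead of nginx's
  buffer machinery, to keep the sketch short):

      #include <string.h>
      #include <unistd.h>

      #define WIN 4096

      /* scan a password file line by line through a fixed window; returns 1 if
         on_line() matched, 0 on clean EOF, -1 on error or an over-long line.
         (a final line with no trailing newline is silently dropped here.) */
      static int
      scan_passwd(int fd, int (*on_line)(const char *line, size_t len))
      {
          char     buf[WIN];
          size_t   carry = 0;    /* partial line held over from the last read */
          ssize_t  n;

          while ((n = read(fd, buf + carry, WIN - carry)) > 0) {
              char    *p = buf;
              char    *nl;
              size_t   len = carry + (size_t) n;

              while ((nl = memchr(p, '\n', len - (size_t) (p - buf))) != NULL) {
                  if (on_line(p, (size_t) (nl - p))) {
                      return 1;                /* credential matched */
                  }
                  p = nl + 1;
              }

              carry = len - (size_t) (p - buf);
              if (carry == WIN) {
                  return -1;                   /* a line longer than the window */
              }
              memmove(buf, p, carry);          /* slide the remainder to the front */
          }

          return n < 0 ? -1 : 0;
      }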

- also as a result of the auth_basic-inspired character-by-character verification of the
  auth credentials, the current implementation could be vulnerable to timing attacks, since
  it returns as soon as it finds a match. the simplest fix would seem to be a sleep(random())
  delaying the response by a few (dozen? hundred?) milliseconds. i presume the non-blocking
  way to do this is with a timer?
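  presumably via something like the following, with the handler names being hypothetical:

      /* park the request on a one-shot timer before sending the 401 */
      static ngx_int_t
      ngx_http_auth_digest_stall(ngx_http_request_t *r)
      {
          ngx_event_t  *ev;

          ev = ngx_pcalloc(r->pool, sizeof(ngx_event_t));
          if (ev == NULL) {
              return NGX_ERROR;
          }

          ev->handler = ngx_http_auth_digest_stall_done;  /* hypothetical: sends the 401 */
          ev->data = r;
          ev->log = r->connection->log;

          r->main->count++;                    /* hold a reference while we wait */
          ngx_add_timer(ev, 10 + ngx_random() % 91);      /* 10-100ms */

          return NGX_AGAIN;                    /* suspend the access phase */
      }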

- OOM conditions in the shm segment are not handled well at the moment, leaving an easy DoS
  attack. valid nonces are added to the shm and expired seconds or minutes later; once the
  shm is full, no new nonces can be remembered and all auth attempts will fail until enough
  space has been reclaimed through expiration. it's unclear to me whether the shm segment
  can be realloc'd to a larger size after config-time (or whether additional segments could
  be alloc'd for a bank-switching scheme). if the amount of memory really is fixed, that
  argues either for more aggressive eviction under low-memory conditions or for moving the
  state storage to the filesystem. could nginx's file caching machinery be used to manage
  expiration?
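  short of any of that, the cheapest mitigation might be to retry the allocation after a
  forced sweep (evict_expired() being the hypothetical gc pass sketched above):

      node = ngx_slab_alloc_locked(shpool, node_size);
      if (node == NULL) {
          ngx_http_auth_digest_evict_expired(shm);   /* sweep now, not on schedule */
          node = ngx_slab_alloc_locked(shpool, node_size);
      }
      if (node == NULL) {
          /* still full: fail open (skip nonce tracking) or fail closed (401)? */
      }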