
[nycphp-talk] Experts help needed (Sessions)

inforequest 1j0lkq002 at sneakemail.com
Thu Aug 11 15:13:19 EDT 2005


Dan Cech dcech-at-phpwerx.net |nyphp dev/internal group use| wrote:

>Joseph,
>
>To understand the approach you need to take a step back and look at the 
>bigger picture of what we're trying to achieve.
>
>Say we have a client who always uses the same browser, and then all of
>a sudden a request comes in from a different browser.  This does not
>fit the profile, so we should prompt for a password because the request
>is likely to be coming from an attacker.
>
>If you have a client who for whatever reason changes their user agent 
>string all the time, you don't want to enforce the user agent check 
>because it will just annoy them, and probably lose you the client.
>
>So, you keep track of the user agent to see which category the client
>in question falls into.  If you are dealing with the former, enforce
>the check because it will increase their security; otherwise use a
>different check.
>
>The counter is there to determine which category the client in question 
>falls into, so when their user agent hasn't changed for some number of 
>page loads you assume they fall into the first category and enforce the 
>check.
>
>This same approach could also be very powerful when applied to IP
>address checking, providing protection for clients whose IP does not
>change often without affecting those whose IP changes frequently.
>
>Dan
>


I think this is a great discussion and very worthwhile, and some of 
these explanations should be very enlightening. However, we need to be 
aware of the problems associated with the "80/20 rule" these approaches 
seem to endorse: satisfy 80% of the market and don't worry about the 
other, non-mainstream 20%.

* Let's not forget that vulnerability usually comes from lazy or 
incomplete coding. When corners are cut (for whatever reasons, with 
whatever validity), vulnerabilities are left in place. After reading this 
thread I can imagine coders carefully coding so that those with static 
IPs and static user agents get standard auth, while those who vary get 
trapped by extra security checkpoints that are little more than re-checks 
of the same credentials. Sure, as the argument goes, there may be more 
risk, so why not re-check.
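
For concreteness, the mechanism under discussion would look roughly 
like this in PHP. The names, the threshold and reauthenticate() are 
placeholders of my own, not anything Dan actually posted:

<?php
// Sketch of the "stability counter" idea: only enforce the user agent
// check once the client has shown the same UA for some number of
// consecutive page loads.
define('UA_STABLE_THRESHOLD', 10);  // page loads before the UA counts as "static"

session_start();

$currentUa = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (!isset($_SESSION['last_ua'])) {
    // First request in this session: start tracking.
    $_SESSION['last_ua']         = $currentUa;
    $_SESSION['ua_stable_count'] = 0;
} elseif ($_SESSION['last_ua'] === $currentUa) {
    // Same UA as last time: the profile looks more and more static.
    $_SESSION['ua_stable_count']++;
} else {
    // The UA changed.  Only treat that as suspicious if the client had
    // a stable profile; otherwise just start tracking the new value.
    if ($_SESSION['ua_stable_count'] >= UA_STABLE_THRESHOLD) {
        reauthenticate();  // hypothetical helper: force a password prompt
    }
    $_SESSION['last_ua']         = $currentUa;
    $_SESSION['ua_stable_count'] = 0;
}
?>

The same pattern would presumably be applied to $_SERVER['REMOTE_ADDR'] 
for the IP variant.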

* Let's just make sure you are testing your code well enough that 
your 20% are properly handled (chances are your dev machines will rarely 
go through the extra checks, and if they do, you'll probably bypass them 
for convenience).
Let's also make sure we aren't inconveniencing the most important market 
segment within that 20% -- I know some of my privacy websites would fail 
miserably if this sort of "extra security" were deployed.
Once you start inspecting the user agent, you get into the browser 
standards world - plenty of browser/extension combinations mangle the UA 
string or block it altogether. As already stated many times, proxies 
wreak havoc with IPs and UA strings.

From a user-friendliness perspective, few things are more annoying to 
me than a definitive but erroneous statement from an error message or 
security check (like when I have cookies ENABLED and I get a message 
"Your browser does not accept cookies...please turn them on and try 
again").

Personally I prefer the concept of "translucent data", although I 
recognize it is not for everyone. Start with 
http://www.unixreview.com/documents/s=7781/ur0401o/
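
As I understand it, the translucent idea is to store only a one-way 
hash (or otherwise unreadable form) of sensitive values, so the site 
can verify a value without ever being able to read it back out of 
storage. A toy sketch, with per-user salting shown and a stronger hash 
assumed for anything real:

<?php
// Toy "translucent" storage: keep only a salted digest of the user's
// secret answer; verification re-derives the digest from the attempt.
function translucent_store($answer, $salt)
{
    return sha1($salt . strtolower(trim($answer)));
}

function translucent_verify($attempt, $salt, $storedDigest)
{
    return sha1($salt . strtolower(trim($attempt))) === $storedDigest;
}

$salt   = 'per-user-random-salt';            // placeholder value
$digest = translucent_store('Rover', $salt); // only $digest is stored
var_dump(translucent_verify('rover', $salt, $digest)); // bool(true)
?>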

-=john andrews
www.seo-fun.com


