[nycphp-talk] Many pages: one script

Hans Zaunere lists at zaunere.com
Tue Aug 7 08:42:12 EDT 2007



Elliotte Harold wrote on Monday, August 06, 2007 6:40 PM:
> David Krings wrote:
> > Elliotte Harold wrote:
> > > Edward Potter wrote:
> > > > hmmmm, I have never found this to be a problem.  Using
> > > > includes, you can pull in .php code from anywhere, even pages
> > > > with a .php extension may be 99.99% HTML, with just a single
> > > > include('foo.php') in it. Keeps things super streamlined, and
> > > > your pages are very readable. 
> > > > 
> > > 
> > > You've got it backwards. I want one script to service many URLs,
> > > not many scripts to service one URL.
> > > 
> > 
> > And that is exactly how Edward described it. You include the one
> > script into the many pages whose URLs should make use of it.
> > I guess that isn't what you are after; it just reads that way.
> 
> What you are proposing is not one script to service N URLs. It is N+1
> scripts to service N URLs. It is still necessary to create N separate
> loader scripts and place them at the right locations. That doesn't
> scale, and it's an extraordinary waste of resources when only a small
> percentage of the possible URLs will ever be reached.
> 
> The goal here is to avoid having to manually create and maintain
> separate files for each URL. One file: many URLs. That's the goal.
> 
> Imagine, for example, a site with a separate URL structure for each
> user, or a separate URL for each date in history.
> 
> The point about this being an Apache problem rather than a PHP problem
> is understood, except that Java/Tomcat/mod_jk does seem able to

Sure they do - because Java essentially requires its own server to run.

> accomplish what I'm looking for, so the real lack may not be in Apache
> but in how PHP connects to Apache.

Quite true - mod_perl can do more, but as far as I know, still not as much
as Tomcat, since Tomcat and Java are joined at the hip.
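
For what it's worth, the usual workaround is to hand the whole URL space to
one script with mod_rewrite and route on the request path yourself.  A
rough sketch - the rewrite rule, URL patterns, and page bodies are mine,
just to illustrate:

    <?php
    // index.php - one script servicing every URL.  The .htaccess that
    // feeds it sends over anything that isn't a real file or directory:
    //
    //   RewriteEngine On
    //   RewriteCond %{REQUEST_FILENAME} !-f
    //   RewriteCond %{REQUEST_FILENAME} !-d
    //   RewriteRule .* index.php [L]
    //
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    if (preg_match('#^/users/(\w+)$#', $path, $m)) {
        echo "profile page for user {$m[1]}";    // one URL per user
    } elseif (preg_match('#^/archive/(\d{4})/(\d{2})$#', $path, $m)) {
        echo "archive for {$m[1]}-{$m[2]}";      // one URL per date
    } else {
        header('HTTP/1.0 404 Not Found');
        echo 'no such page';
    }
    ?>

And the two RewriteCond lines mean images, CSS, and anything else that
really exists on disk still gets served statically, straight from the
filesystem.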

> Overall, though, I suspect all parties (Apache, PHP, Tomcat, Rails,
> etc.) are still too mired in circa-1994 models of web servers serving
> file systems. For example, once you map /foo to a servlet you can't
> then map /foo/bar to something else.

Hmm, interesting...
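
Though once a single script owns the URL space, mapping /foo and /foo/bar
to different handlers is just a matter of matching the most specific
prefix first.  Something like this (the handlers are made up):

    <?php
    function foo_handler() { echo 'the foo page'; }
    function bar_handler() { echo 'the foo/bar page'; }

    // Longest prefix first, so /foo/bar wins over /foo
    $routes = array(
        '/foo/bar' => 'bar_handler',
        '/foo'     => 'foo_handler',
    );

    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    foreach ($routes as $prefix => $handler) {
        if (strpos($path, $prefix) === 0) {
            $handler();    // first (most specific) match dispatches
            exit;
        }
    }
    header('HTTP/1.0 404 Not Found');
    ?>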

> I'm curious if there are any web servers out there that do not start
> with the assumption that each URL maps to a file somewhere. What if a

While it seems like a good idea, it'd probably cause more grief... think of
all those poor images, CSS, PDFs, JS, static HTML, etc. files out there that
we assume get served directly, correctly, statically - and quickly - right
from the filesystem.

> web server were designed to allow all URLs to be delegated to specific
> handlers? A file system handler need be nothing special, no

Well, that essentially exists with AddHandler and friends in Apache, and it's
the crux of many of the techniques described in this thread.
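
The old ForceType trick is probably the purest example: drop an
extensionless PHP file at /foo, tell Apache to execute it as PHP, and
everything below /foo shows up in PATH_INFO.  From memory, so treat it
as a sketch:

    <?php
    // The file "foo" - no extension - with this in httpd.conf
    // or .htaccess:
    //
    //   <Files foo>
    //       ForceType application/x-httpd-php
    //   </Files>
    //
    // A request for /foo/2007/08 then lands here with
    // $_SERVER['PATH_INFO'] set to '/2007/08'.
    $info  = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '';
    $parts = explode('/', trim($info, '/'));    // array('2007', '08')
    echo 'segments: ', implode(', ', $parts);   // dispatch on these
    ?>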

> different from a database handler or a PHP handler.
> 
> RESTful API *design* is fairly easy to do. RESTful API
> *implementation* is fairly hard because no servers I've seen provide
> sufficient flexibility. Coming up on the Web's 20th anniversary, we
> still haven't learned how to take HTTP on its own terms rather than
> by pretending it's something else we're more familiar with.
> Revolutions take time. :-) 

Agreed - I'm still waiting for XSLT to take us by storm.  And I keep
JavaScript turned off in my browser, since no web site should depend on it
being available... right?
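
Kidding aside, once one script sees every request, the verb half of REST is
the easy part.  A crude sketch - the resource handling is left as comments
since it depends entirely on the application:

    <?php
    // Dispatch on the HTTP method; $path as in the routers above
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    switch ($_SERVER['REQUEST_METHOD']) {
        case 'GET':
            echo "representation of $path";   // fetch and render here
            break;
        case 'PUT':
            $body = file_get_contents('php://input');
            // store $body as the new state of $path here
            header('HTTP/1.0 204 No Content');
            break;
        case 'DELETE':
            // remove whatever $path names here
            header('HTTP/1.0 204 No Content');
            break;
        default:
            header('HTTP/1.0 405 Method Not Allowed');
    }
    ?>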

Web's 20 ?= Web 2.0 - it's just a decimal point away...

H



