NYCPHP Meetup

NYPHP.org

[nycphp-talk] Paging through large result sets

(kris)janis p gale sharpwit at hotmail.com
Thu Aug 29 12:41:57 EDT 2002


> But what if the query has multiple joins? It seems inefficient to re-run
> the query each time. If I want to cache the original result in a new MySQL
> table, then I have to manage that on a per-user basis. This implies using
> cookies or session ID to track the user.

not sure, but a strategy i used to use in coldfusion/ms-sql (before you all
gag, be happy to know i've migrated to learning php/mysql during this period
of unemployment) involved performing the query, then redirecting the user to
a view-results page. the query/redirect script set a URL variable containing
the id's of the items in the result set.

so let's say the user searched for products - one script would perform the
rather complex set of queries for the search, build an id list, redirect to
a URL looking like /?id_list=0.2.5.7.8.9.24.78, and the results page would
re-query the db for the product details based on a subset of the
dot-delimited id_list.
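a minimal sketch of that first query/redirect script - the result array, the
results-page filename, and the build_id_list() helper are just illustrative
names i'm making up here; only the dot-delimited id_list idea is from above:

```php
<?php
// join the matching ids with dots for the URL variable
function build_id_list(array $ids) {
    return implode(".", $ids);
}

// pretend these ids came out of the complex search queries
$matches = array(0, 2, 5, 7, 8, 9, 24, 78);

$url = "/results.php?id_list=" . build_id_list($matches);
// header("Location: " . $url);   // redirect to the view-results page
echo $url;   // /results.php?id_list=0.2.5.7.8.9.24.78
```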

said query would look something like:

$ids = str_replace(".", ",", $HTTP_GET_VARS["id_list"]);
$q = mysql_query("SELECT id, name, `desc` FROM product WHERE id IN (" . $ids . ")")
    or die("!mysql_query");

(note the backticks around `desc` - it's a reserved word in mysql.)

this might seem problematic for large result sets, but actually you'd be
surprised how much data you can stuff into a URL with no worries
whatsoever.
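and since the thread is about paging: the results page can slice one page's
worth of ids out of the id_list before building the IN (...) clause. the
function name, the $page/$per_page parameters, and the page size are my own
assumptions, but running every piece through intval() also keeps junk out of
the query string:

```php
<?php
// pull one page of ids out of the dot-delimited list
function page_of_ids($id_list, $page, $per_page) {
    // intval() on every piece guards the IN (...) clause against injection
    $ids = array_map('intval', explode(".", $id_list));
    return array_slice($ids, $page * $per_page, $per_page);
}

$subset = page_of_ids("0.2.5.7.8.9.24.78", 1, 3);  // second page of three
$in = implode(",", $subset);                       // "7,8,9"
// $q = mysql_query("SELECT id, name, `desc` FROM product WHERE id IN ($in)");
echo $in;
```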


