
[nycphp-talk] Paging through large result sets

Michael mogmios at mlug.missouri.edu
Tue Sep 3 13:59:15 EDT 2002


My strategy is typically to have a HEAP table that I can dump the results
into, all tied to a search id (microtime hashed into a nice string). Then
on each request you pass in that search id, your starting point in the
search, and the number of results you want, and, as you say, use LIMIT to
control which results you get back. This is pretty simple to implement and
is fast no matter how complex the original query.

Another practice that can help is to run a nightly script (cron jobs are
your friend) to denormalize your data into a table that is much quicker to
search and doesn't require joins and the like. Again, if you have the
memory, consider making the table a HEAP table so that it sits in memory
and is quicker to search. Since this type of data is regenerated from
other data, it doesn't matter if you lose it unexpectedly. :)
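
A sketch of what such a nightly rebuild might look like, again with made-up
table names (articles, authors, article_search). Note that MEMORY tables
can't hold TEXT/BLOB columns, so only short columns go into the flat table.
The build-then-rename dance is my own embellishment here, so searches keep
working while the rebuild runs:

<?php
// Nightly denormalization script, meant to be run from cron.
// The flat table is derived data, so it is safe to drop and
// regenerate at any time.

$db = mysqli_connect('localhost', 'user', 'pass', 'mydb');

// Build the replacement alongside the live table.
mysqli_query($db, "DROP TABLE IF EXISTS article_search_new");
mysqli_query($db,
    "CREATE TABLE article_search_new ENGINE=MEMORY
     SELECT a.id, a.title, au.name AS author_name
     FROM articles a
     JOIN authors au ON au.id = a.author_id");

// RENAME TABLE swaps both names in one atomic step.
mysqli_query($db,
    "RENAME TABLE article_search TO article_search_old,
                  article_search_new TO article_search");
mysqli_query($db, "DROP TABLE article_search_old");
?>

A crontab entry along the lines of
0 3 * * * php /path/to/rebuild_search.php
would regenerate the table every night at 3am.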


Don't dream it. Be it.

;):):-):):-):):-)8')
Michael McGlothlin <mogmios at mlug.missouri.edu>
http://kavlon.org/projects/

On Thu, 29 Aug 2002, Mike Myers wrote:

>
> First: I am a newcomer to MySQL and PHP. So I will be asking many newbie
> questions, which I hope is not a drag!
>
> Environ: Macintosh G4 running OSX, MySQL and PHP installations provided by
> the inimitable Marc Liyanage. Webserver is Apache.
>
> Experience: Proficient with UserLand Frontier scripting and webserving;
> moderate experience with Perl and JavaScript. PHP looks pretty
> straightforward.
>
> I have already built a few MySQL databases from scratch, so I have some
> experience with the mysql client, mysqladmin, and mysqlimport. Lately I have
> also been using the web-based frontend phpMyAdmin.
>
> ----
>
> What are the implementation strategies for allowing a user to page through a
> result set that is too large to view in its entirety?
>
> I see that for simple queries (e.g., one table or one join), the LIMIT
> clause is the easy solution, which entails re-running the SQL query as the
> user browses.
>
> But what if the query has multiple joins? It seems inefficient to re-run the
> query each time. If I want to cache the original result in a new MySQL
> table, then I have to manage that on a per-user basis. This implies using
> cookies or session ID to track the user.
>
> This also suggests I need to code a separate process that periodically drops
> these temporary result tables after a defined time has passed. Thus, I need
> to continually track the time of last access to each temp table. That data
> could go in another MySQL table, or a file of my own convention.
>
> Whew! There are probably other aspects of this arrangement that require
> management code.
>
> Am I on the right track here, or am I making it harder than it needs to be?
>
> -- mike myers
>
>
>



