[Catalyst] Pager and Cache
demerphq
demerphq at gmail.com
Tue May 20 14:59:38 BST 2008
2008/5/20 Mitch Jackson <perimus at gmail.com>:
>> If you want to exploit indexes in paging properly you need to involve
>> an index in the search criteria and remember the last fetched value.
>> IE:
>>
>> select * from Foo where id >= last_id_fetched LIMIT $size
>
> Ideal as this might be in theory, I have built very few reports as
> simple as sorting by and searching on a single index field. From
> business metrics to user management to online catalogues, you search
> by user role, by product type, by category, or by title, and navigate
> through untold variations on stats sorted and searched dynamically.
> Sure, I could dumb things down a lot to fit into that model, but it
> would be at the inconvenience of the user, who expects flexibility.
>
> It is true, the database has to do a bit of work to deliver these
> results, but that's what the database is for. When you're talking
> about large data sets and lots of concurrent users, you're talking
> about one or more BEEFY database servers ready and willing to do the
> heavy lifting so your web servers don't have to.
I'm not sure, but I think you missed the point.
The first query would be:
select * from Foo where ($conditions) limit $size_plus_one;
The queries after that would be:
select * from Foo where ($conditions) and id >= $plus_ones_record_id
limit $size_plus_one;
Thus the actual criteria involved are irrelevant. For instance, in the
case I'm talking about there are about 20 possible criteria they could
be filtering on, but it uses the exact same "start the search from
where you left off" logic regardless.
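For what it's worth, here is a rough DBI sketch of what I mean. The
DSN, the filter column and the table are just placeholders, and I've
written the order by id explicitly since the approach depends on
walking the index in order:

  use strict;
  use warnings;
  use DBI;

  # Rough sketch only: DSN, table and filter column are made up.
  my $dbh = DBI->connect( 'dbi:SQLite:dbname=foo.db', '', '',
      { RaiseError => 1 } );

  my $page_size = 20;

  sub fetch_page {
      my ($last_id) = @_;

      # Fetch one row more than the page size so we know whether
      # another page exists and which id it starts at.
      my @where = ('name like ?');
      my @bind  = ('%widget%');
      if ( defined $last_id ) {
          push @where, 'id >= ?';
          push @bind,  $last_id;
      }
      my $sql = sprintf
          'select * from Foo where %s order by id limit %d',
          join( ' and ', @where ), $page_size + 1;

      my $rows = $dbh->selectall_arrayref( $sql, { Slice => {} }, @bind );

      # The extra row, if present, becomes the anchor for the next
      # page; it is not shown to the user on this page.
      my $next_id;
      if ( @$rows > $page_size ) {
          $next_id = $rows->[$page_size]{id};
          splice @$rows, $page_size;
      }
      return ( $rows, $next_id );
  }

  my ( $rows, $next_id ) = fetch_page();          # first page
  ( $rows, $next_id ) = fetch_page($next_id)      # any later page
      if defined $next_id;

The extra row is the whole trick: if it comes back you know there is a
next page and exactly which id it starts at, so the database only ever
walks forward along the index instead of re-scanning an ever-growing
offset.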
Cheers,
yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"