[Catalyst] Re: Pager and Cache
demerphq at gmail.com
Tue May 20 14:02:42 BST 2008
2008/5/20 Aristotle Pagaltzis <pagaltzis at gmx.de>:
> * demerphq <demerphq at gmail.com> [2008-05-20 01:10]:
>> If you want to exploit indexes in paging properly you need to
>> involve an index in the search criteria and remember the last
>> fetched value. IE:
>> select * from Foo where id >= last_id_fetched ORDER BY id LIMIT $size
> (Note to readers: using `>=` instead of `>` here is crucial.
> Think about why.)
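The keyset ("seek") query above can be sketched with Python's stdlib sqlite3 module; the table and column names (foo, id, val) are illustrative, not from the original post:

```python
# Minimal sketch of keyset ("seek") pagination, simulating the SQL above.
# Table/column names are illustrative placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO foo (id, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 11)])

def fetch_page(last_id_fetched, size):
    # ORDER BY is required: without it, LIMIT returns rows in an
    # unspecified order and the index cannot be exploited reliably.
    return conn.execute(
        "SELECT id, val FROM foo WHERE id >= ? ORDER BY id LIMIT ?",
        (last_id_fetched, size)).fetchall()

page1 = fetch_page(1, 4)             # ids 1..4
page2 = fetch_page(page1[-1][0], 4)  # >= 4, so ids 4..7 (overlaps by one)
```

Note how `>=` makes the last row of one page reappear as the first row of the next, which is the point Aristotle flags.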
Er, actually you got me there. :-) I had even contemplated posting a
corrective follow-up, so I'm curious what I'm missing. (Although having
the last record of the previous fetch included in the next page is
often nice from a continuity perspective.)
There's another trick: fetch $size + 1 records, do not display the +1
record, and use its presence as a flag that more pages are available.
If you get the extra record, show the "next page" link; if you don't,
you have reached the end of the data set. And in this case the >= is
important, as you will want to refetch the +1 record as record 0 of
the next page...
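That $size + 1 probe can be sketched in plain Python (a list stands in for the SQL query; the function and variable names are illustrative):

```python
# Sketch of the "$size + 1" trick: fetch one extra row and use its
# presence as a has-more flag instead of running a separate COUNT(*).
# All names here are illustrative.
def paginate(rows, last_id, size):
    # stand-in for: SELECT ... WHERE id >= last_id ORDER BY id LIMIT size + 1
    fetched = [r for r in sorted(rows) if r >= last_id][:size + 1]
    page = fetched[:size]
    has_next = len(fetched) > size
    # with >=, the undisplayed +1 row becomes row 0 of the next page
    next_id = fetched[size] if has_next else None
    return page, has_next, next_id

rows = list(range(1, 8))               # ids 1..7
page, more, nxt = paginate(rows, 1, 3)  # ([1, 2, 3], True, 4)
```

The extra row is never shown; it only decides whether to render the "next page" link and supplies the key to start the next page from.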
>> Of course this approach is more susceptible to the data
>> changing behind your back, but all paging suffers this problem
> Actually, this is more stable than `OFFSET` paging: it ignores
> any records inserted into pages you have already seen, so as
> long as you only page in one direction, you will never see any
> record twice.
This is what I meant about data changing behind your back. But yes, I
agree that in some ways it is more attractive behaviour.
>> but unlike "standard paging" it has the problem that it's not
>> "reversible" in the same way that LIMIT $offset, $size is,
>> resulting in "uni-directional" paging.
> Just keep track of both the largest and smallest key value on the
> last page seen. Then changing directions while paging is easy:
> you just use `<=` or `>=` as appropriate.
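Aristotle's bidirectional variant can be sketched the same way: remember the smallest and largest key on the page shown, and flip both the comparison and the sort direction when paging backwards (again with stdlib sqlite3 and illustrative names):

```python
# Sketch of bidirectional keyset paging: paging backwards uses <= with
# a DESCENDING sort, then reverses the rows for display. Names are
# illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO foo (id) VALUES (?)",
                 [(i,) for i in range(1, 11)])

def page_forward(max_seen, size):
    # ">=" from the largest key on the last page seen
    return [r[0] for r in conn.execute(
        "SELECT id FROM foo WHERE id >= ? ORDER BY id ASC LIMIT ?",
        (max_seen, size))]

def page_backward(min_seen, size):
    # "<=" from the smallest key on the last page seen
    rows = [r[0] for r in conn.execute(
        "SELECT id FROM foo WHERE id <= ? ORDER BY id DESC LIMIT ?",
        (min_seen, size))]
    return list(reversed(rows))  # restore ascending order for display

p2 = page_forward(4, 4)          # ids 4..7
back = page_backward(p2[0], 4)   # ids 1..4, overlapping on id 4
```

Only the two boundary keys need to travel in the page URL, so reversing direction costs nothing extra.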
Ah. Heh. I had overlooked that option. :-)
perl -Mre=debug -e "/just|another|perl|hacker/"