[Catalyst] Pager and Cache

Mitch Jackson perimus at gmail.com
Tue May 20 19:10:29 BST 2008

> I'm not sure but I think you missed the point.
> The first query would be:
> select * from Foo where ($conditions) limit $size_plus_one;
> the queries after that would be
> select * from Foo where ($conditions) and id>=$plus_ones_record_id
> limit $size_plus_one.
> Thus the actual criteria involved are irrelevant.  For instance, in
> the case I'm talking about I have about 20 possible criteria they
> could be filtering on, but it uses the exact same "start the search
> from where you left off" logic regardless.
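To make the quoted approach concrete, here is a minimal sketch in Python with SQLite rather than Catalyst/DBIx::Class; the table and column names are made up for illustration. One common variant keeps the id of the last row seen and seeks past it with `id > ?`, fetching one extra row so we know whether a next page exists:

```python
# "Seek" / keyset pagination sketch (illustrative names, not from the thread).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO foo (val) VALUES (?)",
                 [(f"row{i}",) for i in range(1, 26)])  # 25 rows

def fetch_page(size, after_id=0):
    """Fetch `size` rows past `after_id`; the one extra row fetched
    tells us whether another page exists."""
    rows = conn.execute(
        "SELECT id, val FROM foo WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, size + 1),
    ).fetchall()
    has_more = len(rows) > size
    page = rows[:size]
    next_cursor = page[-1][0] if page else None
    return page, next_cursor, has_more

page1, cursor, more = fetch_page(10)          # ids 1..10
page2, cursor, more = fetch_page(10, cursor)  # ids 11..20
```

The DBMS can satisfy the `id > ?` predicate from the primary-key index no matter how deep into the result set you are, which is where the efficiency win comes from.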

I understand the approach and how it improves DBMS efficiency; however,
I see few cases where I could implement it, and it feels a bit like
overengineering.  I'm not saying it's wrong, just that I don't see it
working for me.  TIMTOWTDI is alive and well in Catalyst, as this
discussion highlights.

It assumes the records will always be returned in the same order, and
always in the order of your ID.  For me, that is rarely the case.  Once
you start sorting by other columns, or by joined columns, or in a
different direction, the approach breaks down.

Take a simple, common example: a page from a long list of users.  To
use $last_id_plus_1, I would first need a numeric id on my user table
(unlikely, as the username is already a unique identifier, so adding an
autoincrement INT would be pointless and would break normalization),
and I would have to display the records only in the order they were
added.

With that approach, the end user loses the ability to, say, sort by
group, sort by email address, or sort by username.  The only sort order
they have is the one the application imposes: date entered, ascending.
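For completeness: seek pagination can survive a user-chosen sort, but only if the cursor carries the sort column plus a unique tiebreaker, which complicates things considerably. A hypothetical sketch, again in Python/SQLite rather than DBIx::Class (row-value comparisons need SQLite 3.15 or later; all names here are invented):

```python
# Keyset pagination under a user-chosen sort: the cursor is the pair
# (sort column, unique tiebreaker), compared as a row value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, grp TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a"), ("bob", "b"),
                  ("carol", "a"), ("dave", "b")])

def page_by_group(size, after=("", "")):
    # Order by (grp, username); resume past the last pair seen.
    return conn.execute(
        "SELECT grp, username FROM users"
        " WHERE (grp, username) > (?, ?)"
        " ORDER BY grp, username LIMIT ?",
        (*after, size),
    ).fetchall()

p1 = page_by_group(2)          # [('a', 'alice'), ('a', 'carol')]
p2 = page_by_group(2, p1[-1])  # [('b', 'bob'), ('b', 'dave')]
```

Every sortable column needs its own cursor shape (and ideally its own index), which is exactly the per-criteria bookkeeping I'd rather avoid.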

I'm about to implement the { rows => 10, page => 5 }  # LIMIT 40,10
approach on a table with millions of rows, with non-trivial WHERE
clauses, on top of a paged ExtJS grid.  I suppose I'll be seeing
first-hand whether this is a "wrong" approach.
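For reference, those resultset attributes boil down to plain OFFSET/LIMIT. A rough equivalent, sketched with Python/SQLite (table name illustrative; `page` is 1-based, matching DBIx::Class's page attribute):

```python
# OFFSET/LIMIT pagination: { rows => 10, page => 5 } ~ LIMIT 10 OFFSET 40.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO foo (id) VALUES (?)",
                 [(i,) for i in range(1, 101)])  # 100 rows

def fetch_page(rows, page):
    offset = rows * (page - 1)   # page 5 of 10 rows -> skip 40
    return conn.execute(
        "SELECT id FROM foo ORDER BY id LIMIT ? OFFSET ?",
        (rows, offset),
    ).fetchall()

page5 = fetch_page(10, 5)  # ids 41..50
```

The trade-off is that the database must still walk past the skipped rows, which is what the seek approach avoids; whether that matters at this scale is what I'm about to find out.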

/Mitchell K. Jackson
