[Catalyst] Templates in the database

Jess Robinson castaway at desert-island.demon.co.uk
Sat Jan 20 22:13:13 GMT 2007



On Sat, 20 Jan 2007, Bill Moseley wrote:

> On Sat, Jan 20, 2007 at 02:25:39PM +0000, Jess Robinson wrote:
>> I have plans to do that (which went as far as writing
>> Template::Provider::DBI and including it in my app), but not actually
>> using yet.
>
> Can you show the Provider code?  Or did you just use some of the code
> examples posted to the TT list over the years (or perhaps examples
> from Slash)?  Did you use the standard Provider's method of caching
> templates (once fetched from the database)?

I kept meaning to upload it to CPAN once TT 2.15 was out.. so now I have 
(it should go into the TT repository eventually..). I needed to patch 
Template::Provider to make the code somewhat saner; that patch got into 2.15.
Yes, it does use the normal TT caching, though I haven't exercised it much.

> Might be handy to have the Provider insert the template into the
> database if not found in the database but found on disk.  Then you can
> still use the svn repo as the location for the default templates,
> and the database will get initialized with the templates
> automatically.

Interesting idea.. I'm not sure that's a provider-internal function, though; 
on the other hand, I'm also not sure how you'd find out which provider 
supplied the file if you try to do it from outside.. hmm..

There is, by the way, functionality in Catalyst::View::TT to allow you to
specify multiple providers.. I don't remember if I documented it though.

> Any concern for performance with the templates in the database?
> I suspect not since it's very commonly done.  Probably would be smart
> to have a flag somewhere that indicates when any database template was
> last updated that all processes can check -- a single stat would
> likely be faster than hitting the database a few dozen times to test
> for newer versions.

Well, sane databases cache often-used data in memory, so TT's caching on 
top of a DB would, I suspect, actually be faster than disk-based 
templates.. No figures to prove it, though; care to benchmark it?

>
>> I assume this is some sort of CMS-like app where the users can edit
>> templates/data etc?
>
> It's a CMS for the people that run the site -- not for general users,
> although that would be basically the same code.  I'm thinking it will
> be easier to code this myself than teach them how to use subversion. ;)
>
>> I hadn't thought of that side of it yet, but I imagine
>> a decent set of auditing tables would do the trick. Give the templates
>> versions, and always show the newest, or always show 1 which is live, and
>> have the older ones renumbered (this is what our work system does). Or,
>> have the template table itself stay clean, and have a trigger
>> copy-on-update to an audit table (which keeps down the size of the most
>> used table). (hmm, you said that down below, I should read more)
>
> Ya, I've tracked history a number of ways, but never really sure what
> I like best:
>
> For one database object I have its table plus another history table
> that has the same columns and use triggers to copy the data on update.
>
> Ashley's example stores the diff in the database, where I just store
> previous versions.  I'm not too worried about disk space at this time.

Yeah, we store previous versions.. Makes it easier to switch back if there 
are problems (instead of applying a bunch of diffs to the original).

> I also have a bunch of config-type tables where I'd like to know when
> someone changes a column, so in that case I have two tables:
>
>    create table table_history (
>        id              SERIAL PRIMARY KEY,
>        person          integer REFERENCES person,
>        update_time     timestamp(0) with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP,
>        table_name      text NOT NULL,
>        row_id          text NOT NULL
>    );
>
>    create table column_history (
>        change_set      integer REFERENCES table_history ON DELETE CASCADE,
>        column_name     text,
>        column_value    text,
>        PRIMARY KEY     (change_set, column_name)
>    );
>
> Which I don't really like because I'm storing table names and column
> names which just seems dirty.  No referential integrity.

That is a bit scary. Why not just copy the whole row, and work out which 
columns changed with a VIEW or something?
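
Roughly what I mean, as an untested PostgreSQL sketch (the table and 
trigger names are made up; assumes a "template" base table along the 
lines of the one you sketch below):

    -- History table: same columns as the base table, plus when it was archived.
    create table template_audit (
        like template,
        archived_at     timestamp with time zone not null default now()
    );

    -- Copy the old version of the row into the audit table on every update.
    create function template_audit_copy() returns trigger as $$
    begin
        insert into template_audit select OLD.*, now();
        return NEW;
    end;
    $$ language plpgsql;

    create trigger template_audit_trg
        before update on template
        for each row execute procedure template_audit_copy();

A VIEW joining consecutive audit rows could then work out which columns 
actually changed, without cluttering the live table.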

> I've thought about using a single table where the table has two
> "primary" keys -- one a sequence and the other a non-unique name (such
> as the template path name).  Then the trick is finding the most
> current.  I could look for the one with the largest primary key, or
> maybe with the most recent "last_updated" timestamp.  But, I think I
> might have boolean "current" and make UNIQUE( path_name, current )
> so that there can only be one current one.
>
> Something like:
>
>    create table template (
>        id              SERIAL PRIMARY KEY,
>        path            text,
>        current         boolean,  -- true if this is the current template
>        created_time    timestamp(0) with time zone not null default now(),
>        update_by       integer REFERENCES person,
>        CONSTRAINT      one_current_template UNIQUE( path, current )
>    );
>    create index template_path ON template ( path );
>    create index template_current ON template ( current );
>
> And maybe a "current_template" view class and override update/insert
> to always insert into the base table.
>
> And never update; rather, always insert changes.
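
One caveat with UNIQUE( path, current ): if old rows keep current = false, 
the constraint also forbids keeping a second historical row for the same 
path, so old rows would need current set to NULL (NULLs don't collide in a 
unique constraint). In PostgreSQL a partial unique index is another way to 
allow any number of old rows but at most one current one per path 
(untested sketch, standing in for the UNIQUE constraint in your schema):

    -- At most one row per path where current is true; historical rows
    -- (current = false or null) are unconstrained.
    create unique index template_one_current
        on template ( path ) where current;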

Could do that too.. I think audit tables and triggers are cleaner in the 
long run.. The common case, fetching the current template, doesn't get 
bogged down by all the historical data sitting in the same table.

Jess
