[Dbix-class] DBIx::Class::Storage::DBI::Replicated - read from master
Bill Moseley
moseley at hank.org
Tue Jun 8 04:31:16 GMT 2010
Sanity check.
I need to have DBIC work with an existing database that includes slaves. As
the Replicated docs note, there's often a replication delay from master to
slaves. So, when a write happens I want to direct all reads to the master
for some period of time (for the specific user that did the write). In
addition, I need a way to communicate to other applications/processes that
they also need to use the master for any requests by that user.
The communication between applications is done via memcached.
I'm using PostgreSQL and Slony. I have not looked into implementing
lag_behind_master (as I'm not clear how that works in ::Replicated).
The existing (non-DBIC) application sets a flag in memcached when a
write happens, keyed by user id, and on each request memcached is
checked to see whether the current user needs to read from the master. I'm
looking at a way to duplicate that behavior with ::Replicated.
I'd like to know if this seems like a reasonable approach, and if anyone
sees any gotchas that I need to be aware of.
First, I subclass ::Replicated in Catalyst::Model::DBIC::Schema config via
storage_type => 'MyApp::DB::Replicated'. The point of this subclass is two
things: 1) set a flag in memcached and 2) force all reads to the master for
the remainder of the request.
This subclass looks like:
package MyApp::DB::Replicated;
use Moose;
extends 'DBIx::Class::Storage::DBI::Replicated';
use namespace::autoclean;

# Callback (set by the Catalyst model) that flags the write in memcached.
has flag_write => ( is => 'rw' );

my @methods = qw/
    insert
    insert_bulk
    update
    delete
/;

# After any write, flag it and pin subsequent reads to the master.
after \@methods => sub {
    my $self = shift;
    $self->flag_write->() if $self->flag_write;
    $self->set_reliable_storage;
};

__PACKAGE__->meta->make_immutable;

1;
So, after any of the listed methods runs, a callback flags (in memcached) that a
write happened, and then set_reliable_storage is forced on to make any
subsequent reads go to the master (for the remainder of the request).
Replicated will already force reads to the master inside a
transaction, but a single request might span multiple transactions (and
selects outside of a txn_do), so forcing it for the remainder of the request
seems the best option.
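To illustrate the difference (just a sketch; the 'User' resultset and $id are
placeholders, not from my schema):

    my $id = 42;    # placeholder

    # Inside txn_do, ::Replicated already routes reads to the master:
    $schema->txn_do( sub {
        my $row = $schema->resultset('User')->find($id);   # master
        $row->update( { name => 'changed' } );             # master
    } );

    # Outside a transaction, reads go through the balancer (a slave)
    # unless reliable storage is switched on for the rest of the request:
    $schema->storage->set_reliable_storage;
    my $fresh = $schema->resultset('User')->find($id);     # master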
Now, in the Catalyst Model I need a way to force reads to the master, and
also set memcached when a write to the master happens:
before 'ACCEPT_CONTEXT' => sub {
    my ( $self, $c ) = @_;

    my $schema  = $self->schema;
    my $storage = $schema->storage;

    # Only run once per request, and only for replicated storage.
    return if $c->stash->{_replicated_set}++
        || !$storage->isa( 'DBIx::Class::Storage::DBI::Replicated' );

    # Callback to flag that reads go to master.
    $storage->flag_write( sub { $self->force_master( $c, 1 ) } );

    # Should all reads go to master?
    if ( $self->force_master( $c ) ) {
        $storage->set_reliable_storage;
    }
    else {
        $storage->set_balanced_storage;
    }
};
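force_master isn't shown above; one way it could look, assuming the model
holds a Cache::Memcached (or Cache::Memcached::Fast) client in a memcached()
attribute and that $c->user is available -- all names here are hypothetical:

    # Hypothetical helper; the TTL should cover the worst-case Slony lag.
    sub force_master {
        my ( $self, $c, $set ) = @_;

        my $user = $c->user or return 0;   # anonymous users just read slaves
        my $key  = 'force_master:' . $user->id;

        if ( $set ) {
            # A write just happened: flag this user for, say, 30 seconds.
            $self->memcached->set( $key, 1, 30 );
            return 1;
        }

        return $self->memcached->get( $key ) ? 1 : 0;
    }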
That doesn't feel bulletproof by any means, but does this seem like a good
way to hook into DBIC for this?
BTW -- Do you think ::Replicated should load any class specified by
"storage_type"? I'm having to explicitly "use" my subclass.
Thanks,
--
Bill Moseley
moseley at hank.org