[Bast-commits] r8704 - in DBIx-Class/0.08/branches/prefetch: .
lib/DBIx lib/DBIx/Class lib/DBIx/Class/Admin
lib/DBIx/Class/Manual lib/DBIx/Class/Optional
lib/DBIx/Class/Schema lib/DBIx/Class/Storage
lib/DBIx/Class/Storage/DBI
lib/DBIx/Class/Storage/DBI/Replicated script t t/admin
t/inflate t/lib/DBICTest/Schema t/resultset t/search t/storage
ribasushi at dev.catalyst.perl.org
Sun Feb 14 10:46:08 GMT 2010
Author: ribasushi
Date: 2010-02-14 10:46:07 +0000 (Sun, 14 Feb 2010)
New Revision: 8704
Added:
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin/
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin/Types.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Optional/
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Optional/Dependencies.pm
DBIx-Class/0.08/branches/prefetch/t/10optional_deps.t
DBIx-Class/0.08/branches/prefetch/t/admin/
DBIx-Class/0.08/branches/prefetch/t/admin/01load.t
DBIx-Class/0.08/branches/prefetch/t/admin/02ddl.t
DBIx-Class/0.08/branches/prefetch/t/admin/03data.t
DBIx-Class/0.08/branches/prefetch/t/admin/10script.t
DBIx-Class/0.08/branches/prefetch/t/resultset/is_ordered.t
DBIx-Class/0.08/branches/prefetch/t/search/select_chains.t
DBIx-Class/0.08/branches/prefetch/t/storage/replicated.t
Removed:
DBIx-Class/0.08/branches/prefetch/t/89dbicadmin.t
DBIx-Class/0.08/branches/prefetch/t/storage/replication.t
Modified:
DBIx-Class/0.08/branches/prefetch/
DBIx-Class/0.08/branches/prefetch/Changes
DBIx-Class/0.08/branches/prefetch/Makefile.PL
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Manual/FAQ.pod
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSet.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSetColumn.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Schema/Versioned.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/ODBC.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Pool.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Replicant.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLAnywhere.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLite.pm
DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/UTF8Columns.pm
DBIx-Class/0.08/branches/prefetch/script/dbicadmin
DBIx-Class/0.08/branches/prefetch/t/03podcoverage.t
DBIx-Class/0.08/branches/prefetch/t/749sybase_asa.t
DBIx-Class/0.08/branches/prefetch/t/76select.t
DBIx-Class/0.08/branches/prefetch/t/86sqlt.t
DBIx-Class/0.08/branches/prefetch/t/94versioning.t
DBIx-Class/0.08/branches/prefetch/t/99dbic_sqlt_parser.t
DBIx-Class/0.08/branches/prefetch/t/inflate/datetime_sybase_asa.t
DBIx-Class/0.08/branches/prefetch/t/lib/DBICTest/Schema/ForceForeign.pm
Log:
r8598 at Thesaurus (orig r8585): ribasushi | 2010-02-08 12:48:31 +0100
Release 0.08118
r8600 at Thesaurus (orig r8587): ribasushi | 2010-02-08 12:52:33 +0100
Bump trunk version
r8606 at Thesaurus (orig r8593): ribasushi | 2010-02-08 16:16:44 +0100
cheaper lookup
r8609 at Thesaurus (orig r8596): ribasushi | 2010-02-10 12:40:37 +0100
Consolidate last_insert_id handling with a fallback-attempt on DBI::last_insert_id
r8614 at Thesaurus (orig r8601): caelum | 2010-02-10 21:29:51 +0100
workaround for Moose bug affecting Replicated storage
r8615 at Thesaurus (orig r8602): caelum | 2010-02-10 21:40:07 +0100
revert Moose bug workaround, bump Moose dep for Replicated to 0.98
r8616 at Thesaurus (orig r8603): caelum | 2010-02-10 22:48:34 +0100
add a couple proxy methods to Replicated so it can run
r8628 at Thesaurus (orig r8615): caelum | 2010-02-11 11:35:01 +0100
r21090 at hlagh (orig r7836): caelum | 2009-11-02 06:40:52 -0500
new branch to fix unhandled methods in Storage::DBI::Replicated
r21091 at hlagh (orig r7837): caelum | 2009-11-02 06:42:00 -0500
add test to display unhandled methods
r21092 at hlagh (orig r7838): caelum | 2009-11-02 06:55:34 -0500
minor fix to last committed test
r21093 at hlagh (orig r7839): caelum | 2009-11-02 09:26:00 -0500
minor test code cleanup
r23125 at hlagh (orig r8607): caelum | 2010-02-10 19:25:51 -0500
add unimplemented Storage::DBI methods to ::DBI::Replicated
r23130 at hlagh (orig r8612): ribasushi | 2010-02-11 05:12:48 -0500
Podtesting exclusion
r8630 at Thesaurus (orig r8617): frew | 2010-02-11 11:45:54 +0100
Changes (from a while ago)
r8631 at Thesaurus (orig r8618): caelum | 2010-02-11 11:46:58 +0100
savepoints for SQLAnywhere
r8640 at Thesaurus (orig r8627): ribasushi | 2010-02-11 12:33:19 +0100
r8424 at Thesaurus (orig r8411): ribasushi | 2010-01-22 11:19:40 +0100
Chaining POC test
r8641 at Thesaurus (orig r8628): ribasushi | 2010-02-11 12:34:19 +0100
r8426 at Thesaurus (orig r8413): ribasushi | 2010-01-22 11:35:15 +0100
Move failing regression test away from trunk
r8642 at Thesaurus (orig r8629): ribasushi | 2010-02-11 12:34:56 +0100
r8643 at Thesaurus (orig r8630): ribasushi | 2010-02-11 12:35:03 +0100
r8507 at Thesaurus (orig r8494): frew | 2010-02-01 04:33:08 +0100
small refactor to put select/as/+select/+as etc merging in its own function
r8644 at Thesaurus (orig r8631): ribasushi | 2010-02-11 12:35:11 +0100
r8514 at Thesaurus (orig r8501): frew | 2010-02-02 05:12:29 +0100
revert actual changes from yesterday as per ribasushi's advice
r8645 at Thesaurus (orig r8632): ribasushi | 2010-02-11 12:35:16 +0100
r8522 at Thesaurus (orig r8509): frew | 2010-02-02 19:39:33 +0100
delete +stuff if stuff exists
r8646 at Thesaurus (orig r8633): ribasushi | 2010-02-11 12:35:23 +0100
r8534 at Thesaurus (orig r8521): frew | 2010-02-03 06:14:44 +0100
change deletion/overriding to fix t/76
r8647 at Thesaurus (orig r8634): ribasushi | 2010-02-11 12:35:30 +0100
r8535 at Thesaurus (orig r8522): frew | 2010-02-03 06:57:15 +0100
some basic readability factorings (aka, fewer nested ternaries and long maps)
r8648 at Thesaurus (orig r8635): ribasushi | 2010-02-11 12:36:01 +0100
r8558 at Thesaurus (orig r8545): frew | 2010-02-04 20:32:54 +0100
fix incorrect test in t/76select.t and posit an incorrect solution
r8649 at Thesaurus (orig r8636): ribasushi | 2010-02-11 12:38:47 +0100
r8650 at Thesaurus (orig r8637): ribasushi | 2010-02-11 12:38:57 +0100
r8578 at Thesaurus (orig r8565): ribasushi | 2010-02-05 19:11:09 +0100
Should not be needed
r8651 at Thesaurus (orig r8638): ribasushi | 2010-02-11 12:39:03 +0100
r8579 at Thesaurus (orig r8566): ribasushi | 2010-02-05 19:13:24 +0100
SQLA now fixed
r8652 at Thesaurus (orig r8639): ribasushi | 2010-02-11 12:39:10 +0100
r8624 at Thesaurus (orig r8611): ribasushi | 2010-02-11 10:31:08 +0100
MOAR testing
r8653 at Thesaurus (orig r8640): ribasushi | 2010-02-11 12:39:17 +0100
r8626 at Thesaurus (orig r8613): frew | 2010-02-11 11:16:30 +0100
fix bad test
r8654 at Thesaurus (orig r8641): ribasushi | 2010-02-11 12:39:23 +0100
r8627 at Thesaurus (orig r8614): frew | 2010-02-11 11:21:52 +0100
fix t/76, break rsc tests
r8655 at Thesaurus (orig r8642): ribasushi | 2010-02-11 12:39:30 +0100
r8632 at Thesaurus (orig r8619): frew | 2010-02-11 11:53:50 +0100
fix incorrect test
r8656 at Thesaurus (orig r8643): ribasushi | 2010-02-11 12:39:35 +0100
r8633 at Thesaurus (orig r8620): frew | 2010-02-11 11:54:49 +0100
make t/76s and t/88 pass by deleting from the correct attr hash
r8657 at Thesaurus (orig r8644): ribasushi | 2010-02-11 12:39:40 +0100
r8634 at Thesaurus (orig r8621): frew | 2010-02-11 11:55:41 +0100
fix a test due to ordering issues
r8658 at Thesaurus (orig r8645): ribasushi | 2010-02-11 12:39:45 +0100
r8635 at Thesaurus (orig r8622): frew | 2010-02-11 11:58:23 +0100
this is why you run tests before you commit them.
r8659 at Thesaurus (orig r8646): ribasushi | 2010-02-11 12:39:51 +0100
r8636 at Thesaurus (orig r8623): frew | 2010-02-11 12:00:59 +0100
fix another ordering issue
r8660 at Thesaurus (orig r8647): ribasushi | 2010-02-11 12:39:57 +0100
r8637 at Thesaurus (orig r8624): frew | 2010-02-11 12:11:31 +0100
fix for search/select_chains
r8661 at Thesaurus (orig r8648): ribasushi | 2010-02-11 12:40:03 +0100
r8662 at Thesaurus (orig r8649): caelum | 2010-02-11 12:40:07 +0100
test nanosecond precision for SQLAnywhere
r8663 at Thesaurus (orig r8650): ribasushi | 2010-02-11 12:40:09 +0100
r8639 at Thesaurus (orig r8626): ribasushi | 2010-02-11 12:33:03 +0100
Changes and small omission
r8666 at Thesaurus (orig r8653): ribasushi | 2010-02-11 18:16:45 +0100
Changes
r8674 at Thesaurus (orig r8661): ribasushi | 2010-02-12 09:12:45 +0100
Fix moose dep
r8680 at Thesaurus (orig r8667): dew | 2010-02-12 18:05:11 +0100
Add is_ordered to DBIC::ResultSet
r8688 at Thesaurus (orig r8675): ribasushi | 2010-02-13 09:36:29 +0100
r8667 at Thesaurus (orig r8654): ribasushi | 2010-02-11 18:17:35 +0100
Try a dep-handling idea
r8675 at Thesaurus (orig r8662): ribasushi | 2010-02-12 12:46:11 +0100
Move optional deps out of the Makefile
r8676 at Thesaurus (orig r8663): ribasushi | 2010-02-12 13:40:53 +0100
Support methods to verify group dependencies
r8677 at Thesaurus (orig r8664): ribasushi | 2010-02-12 13:45:18 +0100
Move sqlt dephandling to Optional::Deps
r8679 at Thesaurus (orig r8666): ribasushi | 2010-02-12 14:03:17 +0100
Move replicated to Opt::Deps
r8684 at Thesaurus (orig r8671): ribasushi | 2010-02-13 02:47:52 +0100
Auto-POD for Optional Deps
r8685 at Thesaurus (orig r8672): ribasushi | 2010-02-13 02:53:20 +0100
Privatize the full list method
r8686 at Thesaurus (orig r8673): ribasushi | 2010-02-13 02:59:51 +0100
Scary warning
r8687 at Thesaurus (orig r8674): ribasushi | 2010-02-13 09:35:01 +0100
Changes
r8691 at Thesaurus (orig r8678): ribasushi | 2010-02-13 10:07:15 +0100
Autogen comment for Dependencies.pod
r8692 at Thesaurus (orig r8679): ribasushi | 2010-02-13 10:11:24 +0100
Ask for newer M::I
r8698 at Thesaurus (orig r8685): ribasushi | 2010-02-13 11:11:10 +0100
Add author/license to pod
r8699 at Thesaurus (orig r8686): arcanez | 2010-02-13 13:43:22 +0100
fix typo per nuba on irc
r8705 at Thesaurus (orig r8692): ribasushi | 2010-02-13 15:15:33 +0100
r8001 at Thesaurus (orig r7989): goraxe | 2009-11-30 01:14:47 +0100
Branch for dbicadmin script refactor
r8003 at Thesaurus (orig r7991): goraxe | 2009-11-30 01:26:39 +0100
add DBIx::Class::Admin
r8024 at Thesaurus (orig r8012): goraxe | 2009-12-02 22:49:27 +0100
get deployment tests to pass
r8025 at Thesaurus (orig r8013): goraxe | 2009-12-02 22:50:42 +0100
get deployment tests to pass
r8026 at Thesaurus (orig r8014): goraxe | 2009-12-02 23:52:40 +0100
all ddl tests now pass
r8083 at Thesaurus (orig r8071): goraxe | 2009-12-12 17:01:11 +0100
add quiet attribute to DBIx::Class::Admin
r8086 at Thesaurus (orig r8074): goraxe | 2009-12-12 17:36:58 +0100
add tests for data manipulation ported from 89dbicadmin.t
r8088 at Thesaurus (orig r8076): goraxe | 2009-12-12 17:38:07 +0100
add sleep 1 to t/admin/02ddl.t so insert into upgrade table does not happen too quickly
r8089 at Thesaurus (orig r8077): goraxe | 2009-12-12 17:40:33 +0100
update DBIx::Class::Admin data manip functions to pass the test
r8095 at Thesaurus (orig r8083): goraxe | 2009-12-12 19:36:22 +0100
change passing of preversion to be a parameter
r8096 at Thesaurus (orig r8084): goraxe | 2009-12-12 19:38:26 +0100
add some pod to DBIx::Class::Admin
r8103 at Thesaurus (orig r8091): goraxe | 2009-12-12 22:08:55 +0100
some changes to make DBIx::Class::Admin more compatible with dbicadmin interface
r8104 at Thesaurus (orig r8092): goraxe | 2009-12-12 22:09:39 +0100
commit refactored dbicadmin script and very minor changes to its existing test suite
r8107 at Thesaurus (orig r8095): goraxe | 2009-12-12 22:34:35 +0100
add compatibility for --op for dbicadmin, revert test suite
r8127 at Thesaurus (orig r8115): goraxe | 2009-12-15 22:14:20 +0100
dep check to end of module
r8128 at Thesaurus (orig r8116): goraxe | 2009-12-15 23:15:25 +0100
add namespace::autoclean to DBIx::Class::Admin
r8129 at Thesaurus (orig r8117): goraxe | 2009-12-15 23:16:00 +0100
update test suite to skip if cannot load DBIx::Class::Admin
r8130 at Thesaurus (orig r8118): goraxe | 2009-12-15 23:18:35 +0100
add deps check for 89dbicadmin.t
r8131 at Thesaurus (orig r8119): goraxe | 2009-12-15 23:19:01 +0100
include deps for dbicadmin DBIx::Class::Admin to Makefile.PL
r8149 at Thesaurus (orig r8137): goraxe | 2009-12-17 23:21:50 +0100
use DBICTest::_database over creating a schema object to steal conn info
r8338 at Thesaurus (orig r8326): goraxe | 2010-01-15 19:00:17 +0100
change white space to not be tabs
r8339 at Thesaurus (orig r8327): goraxe | 2010-01-15 19:10:42 +0100
remove Module::Load from test suite
r8358 at Thesaurus (orig r8346): ribasushi | 2010-01-17 17:52:10 +0100
Real detabify
r8359 at Thesaurus (orig r8347): ribasushi | 2010-01-17 18:01:53 +0100
Fix POD (spacing matters)
r8360 at Thesaurus (orig r8348): ribasushi | 2010-01-17 21:57:53 +0100
More detabification
r8361 at Thesaurus (orig r8349): ribasushi | 2010-01-17 22:33:12 +0100
Test cleanup
r8362 at Thesaurus (orig r8350): ribasushi | 2010-01-17 22:41:11 +0100
More test cleanup
r8363 at Thesaurus (orig r8351): ribasushi | 2010-01-17 22:43:57 +0100
And more cleanup
r8364 at Thesaurus (orig r8352): ribasushi | 2010-01-17 22:51:21 +0100
Disallow mucking with INC
r8365 at Thesaurus (orig r8353): ribasushi | 2010-01-17 23:23:15 +0100
More cleanup
r8366 at Thesaurus (orig r8354): ribasushi | 2010-01-17 23:27:49 +0100
Add lib path to ENV so that $^X can see it
r8367 at Thesaurus (orig r8355): ribasushi | 2010-01-17 23:33:10 +0100
Move script-test
r8368 at Thesaurus (orig r8356): goraxe | 2010-01-17 23:35:03 +0100
change warns/dies -> carp/throw_exception
r8369 at Thesaurus (orig r8357): goraxe | 2010-01-17 23:53:54 +0100
add goraxe to contributors
r8370 at Thesaurus (orig r8358): goraxe | 2010-01-17 23:54:15 +0100
remove comment headers
r8404 at Thesaurus (orig r8391): caelum | 2010-01-20 20:54:29 +0100
minor fixups
r8405 at Thesaurus (orig r8392): goraxe | 2010-01-20 21:13:24 +0100
add private types to coerce
r8406 at Thesaurus (orig r8393): goraxe | 2010-01-20 21:17:19 +0100
remove un-needed coerce from schema_class of type Str
r8411 at Thesaurus (orig r8398): caelum | 2010-01-21 23:36:25 +0100
minor documentation updates
r8436 at Thesaurus (orig r8423): caelum | 2010-01-25 02:56:30 +0100
this code never runs anyway
r8440 at Thesaurus (orig r8427): caelum | 2010-01-26 14:05:53 +0100
prefer JSON::DWIW for barekey support
r8693 at Thesaurus (orig r8680): ribasushi | 2010-02-13 10:27:18 +0100
dbicadmin dependencies
r8694 at Thesaurus (orig r8681): ribasushi | 2010-02-13 10:28:04 +0100
Some cleanup, make use of Text::CSV
r8695 at Thesaurus (orig r8682): ribasushi | 2010-02-13 10:34:19 +0100
We use Try::Tiny in a single spot, not grounds for inclusion in deps
r8696 at Thesaurus (orig r8683): ribasushi | 2010-02-13 10:37:30 +0100
POD section
r8697 at Thesaurus (orig r8684): ribasushi | 2010-02-13 11:05:17 +0100
Switch tests to Optional::Deps
r8700 at Thesaurus (orig r8687): ribasushi | 2010-02-13 14:32:50 +0100
Switch Admin/dbicadmin to Opt::Deps
r8702 at Thesaurus (orig r8689): ribasushi | 2010-02-13 14:39:24 +0100
JSON dep is needed for Admin.pm itself
r8703 at Thesaurus (orig r8690): ribasushi | 2010-02-13 15:06:28 +0100
Test fixes
r8704 at Thesaurus (orig r8691): ribasushi | 2010-02-13 15:13:31 +0100
Changes
r8707 at Thesaurus (orig r8694): ribasushi | 2010-02-13 16:37:57 +0100
Test for optional deps manager
r8710 at Thesaurus (orig r8697): caelum | 2010-02-14 05:22:03 +0100
add doc on maximum cursors for SQLAnywhere
r8711 at Thesaurus (orig r8698): ribasushi | 2010-02-14 09:23:09 +0100
Cleanup dependencies / Admin inheritance
r8712 at Thesaurus (orig r8699): ribasushi | 2010-02-14 09:28:29 +0100
Some formatting
r8715 at Thesaurus (orig r8702): ribasushi | 2010-02-14 10:46:51 +0100
This is Moose, so use CMOP
Property changes on: DBIx-Class/0.08/branches/prefetch
___________________________________________________________________
Name: svk:merge
- 168d5346-440b-0410-b799-f706be625ff1:/DBIx-Class-current:2207
462d4d0c-b505-0410-bf8e-ce8f877b3390:/local/bast/DBIx-Class:3159
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/cookbook_fixes:7657
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/prefetch_bug-unqualified_column_in_search_related_cond:7959
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/resultsetcolumn_custom_columns:5160
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/sqla_1.50_compat:5414
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/void_populate_resultset_cond:7935
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/trunk:7982
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class:32260
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class-CDBICompat:54993
9c88509d-e914-0410-b01c-b9530614cbfe:/vendor/DBIx-Class:31122
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_column_attr:10946
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_trunk:11788
bd5ac9a7-f185-4d95-9186-dbb8b392a572:/local/os/bast/DBIx-Class/0.08/trunk:2798
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/_abandoned_but_possibly_useful/table_name_ref:7266
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/ado_mssql:7886
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/autocast:7418
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/belongs_to_null_col_fix:5244
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/cdbicompat_integration:4160
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/column_attr:5074
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/complex_join_rels:4589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/connect_info_hash:7435
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/connected_schema_leak:8264
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/cookbook_fixes:7479
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/count_distinct:6218
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/count_rs:6741
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/create_scalarref_rt51559:8027
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/diamond_relationships:6310
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/discard_changes_replication_fix:7252
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/file_column:3920
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/fix-update-and-delete-as_query:6162
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/get_inflated_columns_rt46953:7964
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/grouped_has_many_join:7382
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/grouped_prefetch:6885
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/ic_dt_post_inflate:8517
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/informix:8434
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/is_resultset_paginated:7769
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/joined_count:6323
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mc_fixes:6645
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_limit_regression:8278
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_money_type:7096
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_rno_pagination:8054
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_storage_minor_refactor:7210
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_top_fixes:6971
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multi_stuff:5565
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multicreate_fixes:7275
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multiple_version_upgrade:8429
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mysql_ansi:7175
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mystery_join:6589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/new_replication_transaction_fixup:7058
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/no_duplicate_indexes_for_pk_cols:8373
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/normalize_connect_info:8274
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/null_column_regression:8314
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/on_connect_call:6854
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/on_disconnect_do:3694
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle-tweaks:6222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle_sequence:4173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle_shorten_aliases:8234
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/order_by_refactor:6475
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/parser_fk_index:4485
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/pg_unqualified_schema:7842
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch-group_by:7917
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_bug-unqualified_column_in_search_related_cond:7900
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_limit:6724
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_pager:8431
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_redux:7206
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/reduce_pings:7261
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/replication_dedux:4600
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rsrc_in_storage:6577
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rt_bug_41083:5437
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/savepoints:4223
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/search_related_prefetch:6818
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sqla_1.50_compat:5321
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sqlt_parser_view:8145
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-ms-access:4142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-tweaks:6262
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subclassed_rsset:5930
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subquery:5617
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/syb_connected:6919
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase:7682
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_asa:8513
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_bulk_insert:7679
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_bulkinsert_support:7796
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_computed_columns:8496
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_mssql:6125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_refactor:7940
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_support:7797
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/table_name_ref:7132
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/top_limit_altfix:6429
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/type_aware_update:6619
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/unresolvable_prefetch:6949
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioned_enhancements:4125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioning:4578
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/view_rels:7908
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/views:5585
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/void_populate_resultset_cond:7944
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/tags/0.08108_prerelease_please_do_not_pull_into_it:7008
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/tags/pre_0.08109_please_do_not_merge:7336
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/trunk:8583
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-C3:318
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-current:2222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-joins:173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-resultset:570
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/datetime:1716
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_compat:1855
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_unique_query_fixes:2142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/inflate:1988
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/many_to_many:2025
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/re_refactor_bugfix:1944
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/reorganize_tests:1827
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset-new-refactor:1766
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_2_electric_boogaloo:2175
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_cleanup:2102
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/sqlt_tests_refactor:2043
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/trunk/DBIx-Class:3606
fe160bb6-dc1c-0410-9f2b-d64a711b54a5:/local/DBIC-trunk-0.08:10510
+ 168d5346-440b-0410-b799-f706be625ff1:/DBIx-Class-current:2207
462d4d0c-b505-0410-bf8e-ce8f877b3390:/local/bast/DBIx-Class:3159
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/cookbook_fixes:7657
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/prefetch_bug-unqualified_column_in_search_related_cond:7959
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/resultsetcolumn_custom_columns:5160
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/sqla_1.50_compat:5414
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/void_populate_resultset_cond:7935
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/trunk:7982
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class:32260
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class-CDBICompat:54993
9c88509d-e914-0410-b01c-b9530614cbfe:/vendor/DBIx-Class:31122
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_column_attr:10946
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_trunk:11788
bd5ac9a7-f185-4d95-9186-dbb8b392a572:/local/os/bast/DBIx-Class/0.08/trunk:2798
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/_abandoned_but_possibly_useful/table_name_ref:7266
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/ado_mssql:7886
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/autocast:7418
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/belongs_to_null_col_fix:5244
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/cdbicompat_integration:4160
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/chaining_fixes:8626
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/column_attr:5074
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/complex_join_rels:4589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/connect_info_hash:7435
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/connected_schema_leak:8264
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/cookbook_fixes:7479
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/count_distinct:6218
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/count_rs:6741
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/create_scalarref_rt51559:8027
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/dbicadmin_refactor:8691
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/dephandling:8674
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/diamond_relationships:6310
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/discard_changes_replication_fix:7252
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/file_column:3920
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/fix-update-and-delete-as_query:6162
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/get_inflated_columns_rt46953:7964
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/grouped_has_many_join:7382
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/grouped_prefetch:6885
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/handle_all_storage_methods_in_replicated:8612
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/ic_dt_post_inflate:8517
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/informix:8434
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/is_resultset_paginated:7769
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/joined_count:6323
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mc_fixes:6645
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_limit_regression:8278
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_money_type:7096
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_rno_pagination:8054
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_storage_minor_refactor:7210
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mssql_top_fixes:6971
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multi_stuff:5565
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multicreate_fixes:7275
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multiple_version_upgrade:8429
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mysql_ansi:7175
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mystery_join:6589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/new_replication_transaction_fixup:7058
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/no_duplicate_indexes_for_pk_cols:8373
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/normalize_connect_info:8274
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/null_column_regression:8314
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/on_connect_call:6854
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/on_disconnect_do:3694
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle-tweaks:6222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle_sequence:4173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle_shorten_aliases:8234
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/order_by_refactor:6475
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/parser_fk_index:4485
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/pg_unqualified_schema:7842
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch-group_by:7917
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_bug-unqualified_column_in_search_related_cond:7900
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_limit:6724
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_pager:8431
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch_redux:7206
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/reduce_pings:7261
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/replication_dedux:4600
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rsrc_in_storage:6577
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rt_bug_41083:5437
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/savepoints:4223
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/search_related_prefetch:6818
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sqla_1.50_compat:5321
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sqlt_parser_view:8145
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-ms-access:4142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-tweaks:6262
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subclassed_rsset:5930
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subquery:5617
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/syb_connected:6919
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase:7682
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_asa:8513
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_bulk_insert:7679
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_bulkinsert_support:7796
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_computed_columns:8496
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_mssql:6125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_refactor:7940
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_support:7797
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/table_name_ref:7132
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/top_limit_altfix:6429
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/type_aware_update:6619
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/unresolvable_prefetch:6949
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioned_enhancements:4125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioning:4578
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/view_rels:7908
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/views:5585
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/void_populate_resultset_cond:7944
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/tags/0.08108_prerelease_please_do_not_pull_into_it:7008
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/tags/pre_0.08109_please_do_not_merge:7336
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/trunk:8702
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-C3:318
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-current:2222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-joins:173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-resultset:570
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/datetime:1716
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_compat:1855
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_unique_query_fixes:2142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/inflate:1988
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/many_to_many:2025
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/re_refactor_bugfix:1944
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/reorganize_tests:1827
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset-new-refactor:1766
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_2_electric_boogaloo:2175
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_cleanup:2102
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/sqlt_tests_refactor:2043
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/trunk/DBIx-Class:3606
fe160bb6-dc1c-0410-9f2b-d64a711b54a5:/local/DBIC-trunk-0.08:10510
Modified: DBIx-Class/0.08/branches/prefetch/Changes
===================================================================
--- DBIx-Class/0.08/branches/prefetch/Changes 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/Changes 2010-02-14 10:46:07 UTC (rev 8704)
@@ -1,5 +1,16 @@
Revision history for DBIx::Class
+ - Add $rs->is_ordered to test for existing order_by on a resultset
+ - Add as_subselect_rs to DBIC::ResultSet from
+ DBIC::Helper::ResultSet::VirtualView::as_virtual_view
+ - Refactor dbicadmin adding DDL manipulation capabilities
+ - New optional dependency manager to aid extension writers
+ - Depend on newest bugfixed Moose
+ - Make resultset chaining consistent wrt selection specification
+ - Storage::DBI::Replicated cleanup
+ - Fix autoinc PKs without an autoinc flag on Sybase ASA
+
+0.08118 2010-02-08 11:53:00 (UTC)
- Fix a bug causing UTF8 columns not to be decoded (RT#54395)
- Fix bug in One->Many->One prefetch-collapse handling (RT#54039)
- Cleanup handling of relationship accessor types
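
As a quick illustration of the two new resultset helpers named in the
Changes hunk above, a minimal sketch (MyApp::Schema and its 'Artist'
source are hypothetical stand-ins for a real schema):

    use MyApp::Schema;

    my $schema = MyApp::Schema->connect('dbi:SQLite:my.db');
    my $rs     = $schema->resultset('Artist');

    # is_ordered reports whether an order_by is in effect anywhere
    # in the resultset chain
    print $rs->is_ordered ? "ordered\n" : "unordered\n";      # unordered

    my $sorted = $rs->search(undef, { order_by => 'name' });
    print $sorted->is_ordered ? "ordered\n" : "unordered\n";  # ordered

    # as_subselect_rs wraps the current resultset in a subselect, so
    # later chaining cannot clash with its joins or group_by
    my $safe = $sorted->as_subselect_rs
                      ->search({ name => { '!=' => '' } });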
Modified: DBIx-Class/0.08/branches/prefetch/Makefile.PL
===================================================================
--- DBIx-Class/0.08/branches/prefetch/Makefile.PL 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/Makefile.PL 2010-02-14 10:46:07 UTC (rev 8704)
@@ -1,141 +1,124 @@
-use inc::Module::Install 0.89;
+use inc::Module::Install 0.93;
use strict;
use warnings;
use POSIX ();
use 5.008001;
-# ****** DO NOT ADD OPTIONAL DEPENDENCIES. EVER. --mst ******
+use FindBin;
+use lib "$FindBin::Bin/lib";
+###
+### DO NOT ADD OPTIONAL DEPENDENCIES HERE, EVEN AS recommends()
+### All of them should go to DBIx::Class::Optional::Dependencies
+###
+
+
name 'DBIx-Class';
perl_version '5.008001';
all_from 'lib/DBIx/Class.pm';
+my $build_requires = {
+ 'DBD::SQLite' => '1.25',
+};
-test_requires 'Test::Builder' => '0.33';
-test_requires 'Test::Deep' => '0';
-test_requires 'Test::Exception' => '0';
-test_requires 'Test::More' => '0.92';
-test_requires 'Test::Warn' => '0.21';
+my $test_requires = {
+ 'File::Temp' => '0.22',
+ 'Test::Builder' => '0.33',
+ 'Test::Deep' => '0',
+ 'Test::Exception' => '0',
+ 'Test::More' => '0.92',
+ 'Test::Warn' => '0.21',
+};
-test_requires 'File::Temp' => '0.22';
+my $runtime_requires = {
+ # Core
+ 'List::Util' => '0',
+ 'Scalar::Util' => '0',
+ 'Storable' => '0',
+ # Dependencies
+ 'Carp::Clan' => '6.0',
+ 'Class::Accessor::Grouped' => '0.09002',
+ 'Class::C3::Componentised' => '1.0005',
+ 'Class::Inspector' => '1.24',
+ 'Data::Page' => '2.00',
+ 'DBI' => '1.609',
+ 'MRO::Compat' => '0.09',
+ 'Module::Find' => '0.06',
+ 'Path::Class' => '0.18',
+ 'Scope::Guard' => '0.03',
+ 'SQL::Abstract' => '1.61',
+ 'SQL::Abstract::Limit' => '0.13',
+ 'Sub::Name' => '0.04',
+ 'Data::Dumper::Concise' => '1.000',
+};
-# Core
-requires 'List::Util' => '0';
-requires 'Scalar::Util' => '0';
-requires 'Storable' => '0';
+# this is so we can order requires alphabetically
+# copies are needed for author requires injection
+my $reqs = {
+ build_requires => { %$build_requires },
+ requires => { %$runtime_requires },
+ test_requires => { %$test_requires },
+};
-# Dependencies (keep in alphabetical order)
-requires 'Carp::Clan' => '6.0';
-requires 'Class::Accessor::Grouped' => '0.09002';
-requires 'Class::C3::Componentised' => '1.0005';
-requires 'Class::Inspector' => '1.24';
-requires 'Data::Page' => '2.00';
-requires 'DBD::SQLite' => '1.25';
-requires 'DBI' => '1.609';
-requires 'JSON::Any' => '1.18';
-requires 'MRO::Compat' => '0.09';
-requires 'Module::Find' => '0.06';
-requires 'Path::Class' => '0.16';
-requires 'Scope::Guard' => '0.03';
-requires 'SQL::Abstract' => '1.61';
-requires 'SQL::Abstract::Limit' => '0.13';
-requires 'Sub::Name' => '0.04';
-requires 'Data::Dumper::Concise' => '1.000';
+# re-build README and require extra modules for testing if we're in a checkout
+if ($Module::Install::AUTHOR) {
-my %replication_requires = (
- 'Moose', => '0.90',
- 'MooseX::Types', => '0.21',
- 'namespace::clean' => '0.11',
- 'Hash::Merge', => '0.11',
-);
+ print "Regenerating README\n";
+ system('pod2text lib/DBIx/Class.pm > README');
-#************************************************************************#
-# Make *ABSOLUTELY SURE* that nothing on this list is a real require, #
-# since every module listed in %force_requires_if_author is deleted #
-# from the final META.yml (thus will never make it as a CPAN dependency) #
-#************************************************************************#
-my %force_requires_if_author = (
- %replication_requires,
+ if (-f 'MANIFEST') {
+ print "Removing MANIFEST\n";
+ unlink 'MANIFEST';
+ }
- # when changing also adjust $DBIx::Class::Storage::DBI::minimum_sqlt_version
- 'SQL::Translator' => '0.11002',
+ print "Regenerating Optional/Dependencies.pod\n";
+ require DBIx::Class::Optional::Dependencies;
+ DBIx::Class::Optional::Dependencies->_gen_pod;
-# 'Module::Install::Pod::Inherit' => '0.01',
+# FIXME Disabled due to unsolved issues, ask theorbtwo
+# require Module::Install::Pod::Inherit;
+# PodInherit();
- # when changing also adjust version in t/02pod.t
- 'Test::Pod' => '1.26',
+ warn <<'EOW';
+******************************************************************************
+******************************************************************************
+*** ***
+*** AUTHOR MODE: all optional test dependencies converted to hard requires ***
+*** ***
+******************************************************************************
+******************************************************************************
- # when changing also adjust version in t/06notabs.t
-# 'Test::NoTabs' => '0.9',
+EOW
- # when changing also adjust version in t/07eol.t
-# 'Test::EOL' => '0.6',
+ $reqs->{test_requires} = {
+ %{$reqs->{test_requires}},
+ %{DBIx::Class::Optional::Dependencies->_all_optional_requirements},
+ };
+}
- # when changing also adjust version in t/03podcoverage.t
- 'Test::Pod::Coverage' => '1.08',
- 'Pod::Coverage' => '0.20',
+# compose final req list, for alphabetical ordering
+my %final_req;
+for my $rtype (keys %$reqs) {
+ for my $mod (keys %{$reqs->{$rtype}} ) {
- # CDBI-compat related
- 'DBIx::ContextualFetch' => '0',
- 'Class::DBI::Plugin::DeepAbstractSearch' => '0',
- 'Class::Trigger' => '0',
- 'Time::Piece::MySQL' => '0',
- 'Clone' => '0',
- 'Date::Simple' => '3.03',
+ # sanity check req duplications
+ if ($final_req{$mod}) {
+ die "$mod specified as both a '$rtype' and a '$final_req{$mod}[0]'\n";
+ }
- # t/52cycle.t
- 'Test::Memory::Cycle' => '0',
- 'Devel::Cycle' => '1.10',
+ $final_req{$mod} = [ $rtype, $reqs->{$rtype}{$mod}||0 ];
+ }
+}
- # t/36datetime.t
- # t/60core.t
- 'DateTime::Format::SQLite' => '0',
+# actual require
+for my $mod (sort keys %final_req) {
+ my ($rtype, $ver) = @{$final_req{$mod}};
+ no strict 'refs';
+ $rtype->($mod, $ver);
+}
- # t/96_is_deteministic_value.t
- 'DateTime::Format::Strptime'=> '0',
-
- # database-dependent reqs
- #
- $ENV{DBICTEST_PG_DSN}
- ? (
- 'Sys::SigAction' => '0',
- 'DBD::Pg' => '2.009002',
- 'DateTime::Format::Pg' => '0',
- ) : ()
- ,
-
- $ENV{DBICTEST_MYSQL_DSN}
- ? (
- 'DateTime::Format::MySQL' => '0',
- ) : ()
- ,
-
- $ENV{DBICTEST_ORA_DSN}
- ? (
- 'DateTime::Format::Oracle' => '0',
- ) : ()
- ,
-
- $ENV{DBICTEST_SYBASE_DSN}
- ? (
- 'DateTime::Format::Sybase' => 0,
- ) : ()
- ,
- grep $_, @ENV{qw/DBICTEST_SYBASE_ASA_DSN DBICTEST_SYBASE_ASA_ODBC_DSN/}
- ? (
- 'DateTime::Format::Strptime' => 0,
- ) : ()
- ,
-);
-#************************************************************************#
-# Make ABSOLUTELY SURE that nothing on the list above is a real require, #
-# since every module listed in %force_requires_if_author is deleted #
-# from the final META.yml (thus will never make it as a CPAN dependency) #
-#************************************************************************#
-
-
install_script (qw|
script/dbicadmin
|);
@@ -159,46 +142,22 @@
DBIx::Class::SQLAHacks DBIx::Class::Storage::DBIHacks
/);
-# re-build README and require extra modules for testing if we're in a checkout
-if ($Module::Install::AUTHOR) {
- warn <<'EOW';
-******************************************************************************
-******************************************************************************
-*** ***
-*** AUTHOR MODE: all optional test dependencies converted to hard requires ***
-*** ***
-******************************************************************************
-******************************************************************************
-
-EOW
-
- foreach my $module (sort keys %force_requires_if_author) {
- build_requires ($module => $force_requires_if_author{$module});
- }
-
- print "Regenerating README\n";
- system('pod2text lib/DBIx/Class.pm > README');
-
- if (-f 'MANIFEST') {
- print "Removing MANIFEST\n";
- unlink 'MANIFEST';
- }
-
-# require Module::Install::Pod::Inherit;
-# PodInherit();
-}
-
auto_install();
WriteAll();
+
# Re-write META.yml to _exclude_ all forced requires (we do not want to ship this)
if ($Module::Install::AUTHOR) {
+ # FIXME test_requires is not yet part of META
+ my %original_build_requires = ( %$build_requires, %$test_requires );
+
+ print "Regenerating META with author requires excluded\n";
Meta->{values}{build_requires} = [ grep
- { not exists $force_requires_if_author{$_->[0]} }
- ( @{Meta->{values}{build_requires}} )
+ { exists $original_build_requires{$_->[0]} }
+ ( @{Meta->{values}{build_requires}} )
];
Meta->write;
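
The "actual require" loop in the new Makefile.PL above leans on Perl's
symbolic code references: under "no strict 'refs'" a string such as
'test_requires' can be called as if it were a code reference, resolving
to the Module::Install DSL function of the same name. A self-contained
sketch of the same dispatch pattern (the two subs below are stand-ins,
not the real Module::Install functions):

    use strict;
    use warnings;

    sub requires      { print "requires(@_)\n" }
    sub test_requires { print "test_requires(@_)\n" }

    my %final_req = (
      'DBI'        => [ requires      => '1.609' ],
      'Test::More' => [ test_requires => '0.92'  ],
    );

    for my $mod (sort keys %final_req) {
      my ($rtype, $ver) = @{ $final_req{$mod} };
      no strict 'refs';       # allow the string in $rtype to be
      $rtype->($mod, $ver);   # called as &main::requires etc.
    }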
Added: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin/Types.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin/Types.pm (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin/Types.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,48 @@
+package # hide from PAUSE
+ DBIx::Class::Admin::Types;
+
+use MooseX::Types -declare => [qw(
+ DBICConnectInfo
+ DBICArrayRef
+ DBICHashRef
+)];
+use MooseX::Types::Moose qw/Int HashRef ArrayRef Str Any Bool/;
+use MooseX::Types::JSON qw(JSON);
+
+subtype DBICArrayRef,
+ as ArrayRef;
+
+subtype DBICHashRef,
+ as HashRef;
+
+coerce DBICArrayRef,
+ from JSON,
+ via { _json_to_data ($_) };
+
+coerce DBICHashRef,
+ from JSON,
+ via { _json_to_data($_) };
+
+subtype DBICConnectInfo,
+ as ArrayRef;
+
+coerce DBICConnectInfo,
+ from JSON,
+ via { return _json_to_data($_) } ;
+
+coerce DBICConnectInfo,
+ from Str,
+ via { return _json_to_data($_) };
+
+coerce DBICConnectInfo,
+ from HashRef,
+ via { [ $_ ] };
+
+sub _json_to_data {
+ my ($json_str) = @_;
+ my $json = JSON::Any->new(allow_barekey => 1, allow_singlequote => 1, relaxed=>1);
+ my $ret = $json->jsonToObj($json_str);
+ return $ret;
+}
+
+1;
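
A short sketch of what the coercions in Admin/Types.pm buy: any Moose
attribute typed against them accepts a JSON string and receives a plain
Perl structure. The Demo class below is a hypothetical consumer, not
part of the commit:

    package Demo;
    use Moose;
    use DBIx::Class::Admin::Types qw(DBICConnectInfo);
    use JSON::Any;  # the _json_to_data helper assumes this is loaded

    has connect_info => (
      is     => 'ro',
      isa    => DBICConnectInfo,
      coerce => 1,
    );

    package main;

    # the JSON array string is coerced into a real arrayref
    my $demo = Demo->new(
      connect_info => '["dbi:SQLite:my.db", "", ""]',
    );
    print $demo->connect_info->[0], "\n";   # dbi:SQLite:my.db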
Added: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin.pm (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Admin.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,568 @@
+package DBIx::Class::Admin;
+
+# check deps
+BEGIN {
+ use Carp::Clan qw/^DBIx::Class/;
+ use DBIx::Class;
+ croak('The following modules are required for DBIx::Class::Admin ' . DBIx::Class::Optional::Dependencies->req_missing_for ('admin') )
+ unless DBIx::Class::Optional::Dependencies->req_ok_for ('admin');
+}
+
+use Moose;
+use MooseX::Types::Moose qw/Int Str Any Bool/;
+use DBIx::Class::Admin::Types qw/DBICConnectInfo DBICHashRef/;
+use MooseX::Types::JSON qw(JSON);
+use MooseX::Types::Path::Class qw(Dir File);
+use Try::Tiny;
+use JSON::Any qw(DWIW XS JSON);
+use namespace::autoclean;
+
+=head1 NAME
+
+DBIx::Class::Admin - Administration object for schemas
+
+=head1 SYNOPSIS
+
+ $ dbicadmin --help
+
+ $ dbicadmin --schema=MyApp::Schema \
+ --connect='["dbi:SQLite:my.db", "", ""]' \
+ --deploy
+
+ $ dbicadmin --schema=MyApp::Schema --class=Employee \
+ --connect='["dbi:SQLite:my.db", "", ""]' \
+ --op=update --set='{ "name": "New_Employee" }'
+
+ use DBIx::Class::Admin;
+
+ # ddl manipulation
+ my $admin = DBIx::Class::Admin->new(
+ schema_class=> 'MY::Schema',
+ sql_dir=> $sql_dir,
+ connect_info => { dsn => $dsn, user => $user, password => $pass },
+ );
+
+ # create SQLite sql
+ $admin->create('SQLite');
+
+ # create SQL diff for an upgrade
+ $admin->create('SQLite', {} , "1.0");
+
+ # upgrade a database
+ $admin->upgrade();
+
+ # install a version for an unversioned schema
+ $admin->install("3.0");
+
+=head1 REQUIREMENTS
+
+The Admin interface has additional requirements not currently part of
+L<DBIx::Class>. See L<DBIx::Class::Optional::Dependencies> for more details.
+
+=head1 ATTRIBUTES
+
+=head2 schema_class
+
+the class of the schema to load
+
+=cut
+
+has 'schema_class' => (
+ is => 'ro',
+ isa => Str,
+);
+
+
+=head2 schema
+
+A pre-connected schema object can be provided for manipulation
+
+=cut
+
+has 'schema' => (
+ is => 'ro',
+ isa => 'DBIx::Class::Schema',
+ lazy_build => 1,
+);
+
+sub _build_schema {
+ my ($self) = @_;
+ require Class::MOP;
+ Class::MOP::load_class($self->schema_class);
+
+ $self->connect_info->[3]->{ignore_version} =1;
+ return $self->schema_class->connect(@{$self->connect_info()} ); # , $self->connect_info->[3], { ignore_version => 1} );
+}
+
+
+=head2 resultset
+
+a resultset from the schema to operate on
+
+=cut
+
+has 'resultset' => (
+ is => 'rw',
+ isa => Str,
+);
+
+
+=head2 where
+
+a hash ref or json string to be used for identifying data to manipulate
+
+=cut
+
+has 'where' => (
+ is => 'rw',
+ isa => DBICHashRef,
+ coerce => 1,
+);
+
+
+=head2 set
+
+a hash ref or json string to be used for inserting or updating data
+
+=cut
+
+has 'set' => (
+ is => 'rw',
+ isa => DBICHashRef,
+ coerce => 1,
+);
+
+
+=head2 attrs
+
+a hash ref or json string to be used for passing additional info to the ->search call
+
+=cut
+
+has 'attrs' => (
+ is => 'rw',
+ isa => DBICHashRef,
+ coerce => 1,
+);
+
+
+=head2 connect_info
+
+the arguments to provide to the connect call of the schema_class
+
+=cut
+
+has 'connect_info' => (
+ is => 'ro',
+ isa => DBICConnectInfo,
+ lazy_build => 1,
+ coerce => 1,
+);
+
+sub _build_connect_info {
+ my ($self) = @_;
+ return $self->_find_stanza($self->config, $self->config_stanza);
+}
+
+
+=head2 config_file
+
+provide a config_file to read connect_info from; if this is provided,
+config_stanza should also be provided to locate where the connect_info is in the config.
+The config file should be in a format readable by Config::General.
+
+=cut
+
+has config_file => (
+ is => 'ro',
+ isa => File,
+ coerce => 1,
+);
+
+
+=head2 config_stanza
+
+config_stanza for use with config_file should be a '::' delimited 'path' to the connection information,
+designed for use with Catalyst config files
+
+=cut
+
+has 'config_stanza' => (
+ is => 'ro',
+ isa => Str,
+);
+
+
+=head2 config
+
+Instead of loading from a file the configuration can be provided directly as a hash ref. Please note
+config_stanza will still be required.
+
+=cut
+
+has config => (
+ is => 'ro',
+ isa => DBICHashRef,
+ lazy_build => 1,
+);
+
+sub _build_config {
+ my ($self) = @_;
+
+ eval { require Config::Any }
+ or die ("Config::Any is required to parse the config file.\n");
+
+ my $cfg = Config::Any->load_files ( {files => [$self->config_file], use_ext =>1, flatten_to_hash=>1});
+
+ # just grab the config from the config file
+ $cfg = $cfg->{$self->config_file};
+ return $cfg;
+}
+
+
+=head2 sql_dir
+
+The location where sql ddl files should be created or found for an upgrade.
+
+=cut
+
+has 'sql_dir' => (
+ is => 'ro',
+ isa => Dir,
+ coerce => 1,
+);
+
+
+=head2 version
+
+Used for install, the version which will be 'installed' in the schema
+
+=cut
+
+has version => (
+ is => 'rw',
+ isa => Str,
+);
+
+
+=head2 preversion
+
+Previous version of the schema to create an upgrade diff for; the full sql for that version must be in the sql_dir
+
+=cut
+
+has preversion => (
+ is => 'rw',
+ isa => Str,
+);
+
+
+=head2 force
+
+Try to force certain operations.
+
+=cut
+
+has force => (
+ is => 'rw',
+ isa => Bool,
+);
+
+
+=head2 quiet
+
+Be less verbose about actions
+
+=cut
+
+has quiet => (
+ is => 'rw',
+ isa => Bool,
+);
+
+has '_confirm' => (
+ is => 'bare',
+ isa => Bool,
+);
+
+
+=head1 METHODS
+
+=head2 create
+
+=over 4
+
+=item Arguments: $sqlt_type, \%sqlt_args, $preversion
+
+=back
+
+L<create> will generate sql for the supplied schema_class in sql_dir. The flavour of sql to
+generate can be controlled by supplying a sqlt_type which should be a L<SQL::Translator> name.
+
+Arguments for L<SQL::Translator> can be supplied in the sqlt_args hashref.
+
+Optional preversion can be supplied to generate a diff to be used by upgrade.
+
+=cut
+
+sub create {
+ my ($self, $sqlt_type, $sqlt_args, $preversion) = @_;
+
+ $preversion ||= $self->preversion();
+
+ my $schema = $self->schema();
+ # create the dir if does not exist
+ $self->sql_dir->mkpath() if ( ! -d $self->sql_dir);
+
+ $schema->create_ddl_dir( $sqlt_type, (defined $schema->schema_version ? $schema->schema_version : ""), $self->sql_dir->stringify, $preversion, $sqlt_args );
+}
+
+
+=head2 upgrade
+
+=over 4
+
+=item Arguments: <none>
+
+=back
+
+upgrade will attempt to upgrade the connected database to the same version as the schema_class.
+B<MAKE SURE YOU BACKUP YOUR DB FIRST>
+
+=cut
+
+sub upgrade {
+ my ($self) = @_;
+ my $schema = $self->schema();
+ if (!$schema->get_db_version()) {
+ # schema is unversioned
+ $schema->throw_exception ("Could not determin current schema version, please either install() or deploy().\n");
+ } else {
+ my $ret = $schema->upgrade();
+ return $ret;
+ }
+}
+
+
+=head2 install
+
+=over 4
+
+=item Arguments: $version
+
+=back
+
+install is here to help when you want to move to L<DBIx::Class::Schema::Versioned> and have an existing
+database. install will take a version and add the version tracking tables and 'install' the version. No
+further ddl modification takes place. Setting the force attribute to a true value will allow overriding of
+already versioned databases.
+
+=cut
+
+sub install {
+ my ($self, $version) = @_;
+
+ my $schema = $self->schema();
+ $version ||= $self->version();
+ if (!$schema->get_db_version() ) {
+ # schema is unversioned
+ print "Going to install schema version\n";
+ my $ret = $schema->install($version);
+ print "retun is $ret\n";
+ }
+ elsif ($schema->get_db_version() and $self->force ) {
+ carp "Forcing install may not be a good idea";
+ if($self->_confirm() ) {
+ $self->schema->_set_db_version({ version => $version});
+ }
+ }
+ else {
+ $schema->throw_exception ("Schema already has a version. Try upgrade instead.\n");
+ }
+
+}
+
+
+=head2 deploy
+
+=over 4
+
+=item Arguments: $args
+
+=back
+
+deploy will create the schema at the connected database. C<$args> are passed straight to
+L<DBIx::Class::Schema/deploy>.
+
+=cut
+
+sub deploy {
+ my ($self, $args) = @_;
+ my $schema = $self->schema();
+ if (!$schema->get_db_version() ) {
+ # schema is unversioned
+ $schema->deploy( $args, $self->sql_dir)
+ or $schema->throw_exception ("Could not deploy schema.\n"); # FIXME deploy() does not return 1/0 on success/fail
+ } else {
+ $schema->throw_exception("A versioned schema has already been deployed, try upgrade instead.\n");
+ }
+}
+
+=head2 insert
+
+=over 4
+
+=item Arguments: $rs, $set
+
+=back
+
+insert takes the name of a resultset from the schema_class and a hashref of data to insert
+into that resultset.
+
+=cut
+
+sub insert {
+ my ($self, $rs, $set) = @_;
+
+ $rs ||= $self->resultset();
+ $set ||= $self->set();
+ my $resultset = $self->schema->resultset($rs);
+ my $obj = $resultset->create( $set );
+ print ''.ref($resultset).' ID: '.join(',',$obj->id())."\n" if (!$self->quiet);
+}
+
+
+=head2 update
+
+=over 4
+
+=item Arguments: $rs, $set, $where
+
+=back
+
+update takes the name of a resultset from the schema_class, a hashref of data to update and
+a where hash used to form the search for the rows to update.
+
+=cut
+
+sub update {
+ my ($self, $rs, $set, $where) = @_;
+
+ $rs ||= $self->resultset();
+ $where ||= $self->where();
+ $set ||= $self->set();
+ my $resultset = $self->schema->resultset($rs);
+ $resultset = $resultset->search( ($where||{}) );
+
+ my $count = $resultset->count();
+ print "This action will modify $count ".ref($resultset)." records.\n" if (!$self->quiet);
+
+ if ( $self->force || $self->_confirm() ) {
+ $resultset->update_all( $set );
+ }
+}
+
+
+=head2 delete
+
+=over 4
+
+=item Arguments: $rs, $where, $attrs
+
+=back
+
+delete takes the name of a resultset from the schema_class, a where hashref and an attrs hashref to pass to ->search.
+The found data is deleted and cannot be recovered.
+
+=cut
+
+sub delete {
+ my ($self, $rs, $where, $attrs) = @_;
+
+ $rs ||= $self->resultset();
+ $where ||= $self->where();
+ $attrs ||= $self->attrs();
+ my $resultset = $self->schema->resultset($rs);
+ $resultset = $resultset->search( ($where||{}), ($attrs||()) );
+
+ my $count = $resultset->count();
+ print "This action will delete $count ".ref($resultset)." records.\n" if (!$self->quiet);
+
+ if ( $self->force || $self->_confirm() ) {
+ $resultset->delete_all();
+ }
+}
+
+
+=head2 select
+
+=over 4
+
+=item Arguments: $rs, $where, $attrs
+
+=back
+
+select takes the name of a resultset from the schema_class, a where hashref
+and an attrs hashref to pass to ->search. The matching data is returned as an
+arrayref whose first row is the list of column names.
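+
+For example (hypothetical resultset, where and attrs):
+
+  my $data = $admin->select('Employee', { name => 'Trout' }, { order_by => 'name' });
+  # $data->[0] holds the column names, subsequent rows hold the values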
+
+=cut
+
+sub select {
+ my ($self, $rs, $where, $attrs) = @_;
+
+ $rs ||= $self->resultset();
+ $where ||= $self->where();
+ $attrs ||= $self->attrs();
+ my $resultset = $self->schema->resultset($rs);
+ $resultset = $resultset->search( ($where||{}), ($attrs||()) );
+
+ my @data;
+ my @columns = $resultset->result_source->columns();
+ push @data, [@columns];
+
+ while (my $row = $resultset->next()) {
+ my @fields;
+ foreach my $column (@columns) {
+ push( @fields, $row->get_column($column) );
+ }
+ push @data, [@fields];
+ }
+
+ return \@data;
+}
+
+sub _confirm {
+ my ($self) = @_;
+ print "Are you sure you want to do this? (type YES to confirm) \n";
+ # mainly here for testing
+ return 1 if ($self->meta->get_attribute('_confirm')->get_value($self));
+ my $response = <STDIN>;
+ return 1 if ($response=~/^YES/);
+ return;
+}
+
+sub _find_stanza {
+ my ($self, $cfg, $stanza) = @_;
+ my @path = split /::/, $stanza;
+ while (my $path = shift @path) {
+ if (exists $cfg->{$path}) {
+ $cfg = $cfg->{$path};
+ }
+ else {
+ die ("Could not find $stanza in config, $path does not seem to exist.\n");
+ }
+ }
+ return $cfg;
+}
+
+=head1 AUTHOR
+
+See L<DBIx::Class/CONTRIBUTORS>.
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself
+
+=cut
+
+1;
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Manual/FAQ.pod
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Manual/FAQ.pod 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Manual/FAQ.pod 2010-02-14 10:46:07 UTC (rev 8704)
@@ -641,7 +641,7 @@
second one will use a default port of 5433, while L<DBD::Pg> is compiled with a
default port of 5432.
-You can chance the port setting in C<postgresql.conf>.
+You can change the port setting in C<postgresql.conf>.
=item I've lost or forgotten my mysql password
Property changes on: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Optional
___________________________________________________________________
Name: svn:ignore
+ Dependencies.pod
Added: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Optional/Dependencies.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Optional/Dependencies.pm (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Optional/Dependencies.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,376 @@
+package DBIx::Class::Optional::Dependencies;
+
+use warnings;
+use strict;
+
+use Carp;
+
+# NO EXTERNAL NON-5.8.1 CORE DEPENDENCIES EVER (e.g. C::A::G)
+# This module is to be loaded by Makefile.PL on a pristine system
+
+# POD is generated automatically by calling _gen_pod from the
+# Makefile.PL in $AUTHOR mode
+
+my $moose_basic = {
+ 'Moose' => '0.98',
+ 'MooseX::Types' => '0.21',
+};
+
+my $admin_basic = {
+ %$moose_basic,
+ 'MooseX::Types::Path::Class' => '0.05',
+ 'MooseX::Types::JSON' => '0.02',
+ 'JSON::Any' => '1.22',
+ 'namespace::autoclean' => '0.09',
+};
+
+my $reqs = {
+ dist => {
+ #'Module::Install::Pod::Inherit' => '0.01',
+ },
+
+ replicated => {
+ req => {
+ %$moose_basic,
+ 'namespace::clean' => '0.11',
+ 'Hash::Merge' => '0.11',
+ },
+ pod => {
+ title => 'Storage::Replicated',
+ desc => 'Modules required for L<DBIx::Class::Storage::DBI::Replicated>',
+ },
+ },
+
+ admin => {
+ req => {
+ %$admin_basic,
+ },
+ pod => {
+ title => 'DBIx::Class::Admin',
+ desc => 'Modules required for the DBIx::Class administrative library',
+ },
+ },
+
+ admin_script => {
+ req => {
+ %$moose_basic,
+ %$admin_basic,
+ 'Getopt::Long::Descriptive' => '0.081',
+ 'Text::CSV' => '1.16',
+ },
+ pod => {
+ title => 'dbicadmin',
+ desc => 'Modules required for the CLI DBIx::Class interface dbicadmin',
+ },
+ },
+
+ deploy => {
+ req => {
+ 'SQL::Translator' => '0.11002',
+ },
+ pod => {
+ title => 'Storage::DBI::deploy()',
+ desc => 'Modules required for L<DBIx::Class::Storage::DBI/deploy> and L<DBIx::Class::Storage::DBI/deploymen_statements>',
+ },
+ },
+
+ author => {
+ req => {
+ 'Test::Pod' => '1.26',
+ 'Test::Pod::Coverage' => '1.08',
+ 'Pod::Coverage' => '0.20',
+ #'Test::NoTabs' => '0.9',
+ #'Test::EOL' => '0.6',
+ },
+ },
+
+ core => {
+ req => {
+ # t/52cycle.t
+ 'Test::Memory::Cycle' => '0',
+ 'Devel::Cycle' => '1.10',
+
+ # t/36datetime.t
+ # t/60core.t
+ 'DateTime::Format::SQLite' => '0',
+
+ # t/96_is_deteministic_value.t
+ 'DateTime::Format::Strptime'=> '0',
+ },
+ },
+
+ cdbicompat => {
+ req => {
+ 'DBIx::ContextualFetch' => '0',
+ 'Class::DBI::Plugin::DeepAbstractSearch' => '0',
+ 'Class::Trigger' => '0',
+ 'Time::Piece::MySQL' => '0',
+ 'Clone' => '0',
+ 'Date::Simple' => '3.03',
+ },
+ },
+
+ rdbms_pg => {
+ req => {
+ $ENV{DBICTEST_PG_DSN}
+ ? (
+ 'Sys::SigAction' => '0',
+ 'DBD::Pg' => '2.009002',
+ 'DateTime::Format::Pg' => '0',
+ ) : ()
+ },
+ },
+
+ rdbms_mysql => {
+ req => {
+ $ENV{DBICTEST_MYSQL_DSN}
+ ? (
+ 'DateTime::Format::MySQL' => '0',
+ 'DBD::mysql' => '0',
+ ) : ()
+ },
+ },
+
+ rdbms_oracle => {
+ req => {
+ $ENV{DBICTEST_ORA_DSN}
+ ? (
+ 'DateTime::Format::Oracle' => '0',
+ ) : ()
+ },
+ },
+
+ rdbms_ase => {
+ req => {
+ $ENV{DBICTEST_SYBASE_DSN}
+ ? (
+ 'DateTime::Format::Sybase' => 0,
+ ) : ()
+ },
+ },
+
+ rdbms_asa => {
+ req => {
+ grep $_, @ENV{qw/DBICTEST_SYBASE_ASA_DSN DBICTEST_SYBASE_ASA_ODBC_DSN/}
+ ? (
+ 'DateTime::Format::Strptime' => 0,
+ ) : ()
+ },
+ },
+};
+
+
+sub _all_optional_requirements {
+ return { map { %{ $reqs->{$_}{req} || {} } } (keys %$reqs) };
+}
+
+sub req_list_for {
+ my ($class, $group) = @_;
+
+ croak "req_list_for() expects a requirement group name"
+ unless $group;
+
+ my $deps = $reqs->{$group}{req}
+ or croak "Requirement group '$group' does not exist";
+
+ return { %$deps };
+}
+
+
+our %req_availability_cache;
+sub req_ok_for {
+ my ($class, $group) = @_;
+
+ croak "req_ok_for() expects a requirement group name"
+ unless $group;
+
+ $class->_check_deps ($group) unless $req_availability_cache{$group};
+
+ return $req_availability_cache{$group}{status};
+}
+
+sub req_missing_for {
+ my ($class, $group) = @_;
+
+ croak "req_missing_for() expects a requirement group name"
+ unless $group;
+
+ $class->_check_deps ($group) unless $req_availability_cache{$group};
+
+ return $req_availability_cache{$group}{missing};
+}
+
+sub req_errorlist_for {
+ my ($class, $group) = @_;
+
+ croak "req_errorlist_for() expects a requirement group name"
+ unless $group;
+
+ $class->_check_deps ($group) unless $req_availability_cache{$group};
+
+ return $req_availability_cache{$group}{errorlist};
+}
+
+sub _check_deps {
+ my ($class, $group) = @_;
+
+ my $deps = $class->req_list_for ($group);
+
+ my %errors;
+ for my $mod (keys %$deps) {
+ if (my $ver = $deps->{$mod}) {
+ eval "use $mod $ver ()";
+ }
+ else {
+ eval "require $mod";
+ }
+
+ $errors{$mod} = $@ if $@;
+ }
+
+ if (keys %errors) {
+ my $missing = join (', ', map { $deps->{$_} ? "$_ >= $deps->{$_}" : $_ } (sort keys %errors) );
+ $missing .= " (see $class for details)" if $reqs->{$group}{pod};
+ $req_availability_cache{$group} = {
+ status => 0,
+ errorlist => { %errors },
+ missing => $missing,
+ };
+ }
+ else {
+ $req_availability_cache{$group} = {
+ status => 1,
+ errorlist => {},
+ missing => '',
+ };
+ }
+}
+
+sub _gen_pod {
+ my $class = shift;
+ my $modfn = __PACKAGE__ . '.pm';
+ $modfn =~ s/\:\:/\//g;
+
+ my @chunks = (
+ <<"EOC",
+#########################################################################
+##################### A U T O G E N E R A T E D ########################
+#########################################################################
+#
+# The contents of this POD file are auto-generated. Any changes you make
+# will be lost. If you need to change the generated text edit _gen_pod()
+# at the end of $modfn
+#
+EOC
+ '=head1 NAME',
+ "$class - Optional module dependency specifications",
+ '=head1 DESCRIPTION',
+ <<'EOD',
+Some of the less-frequently used features of L<DBIx::Class> have external
+module dependencies on their own. In order not to burden the average user
+with modules he will never use, these optional dependencies are not included
+in the base Makefile.PL. Instead an exception with a descriptive message is
+thrown when a specific feature is missing one or several modules required for
+its operation. This module is the central holding place for the current list
+of such dependencies.
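+
+For example, a feature guards itself roughly like this (a simplified sketch
+of the pattern used by the core code):
+
+  unless (DBIx::Class::Optional::Dependencies->req_ok_for ('deploy')) {
+    die 'Unable to proceed without '
+      . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy');
+  }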
+EOD
+ '=head1 CURRENT REQUIREMENT GROUPS',
+ <<'EOD',
+Dependencies are organized in C<groups> and each group can list one or more
+required modules, with an optional minimum version (or 0 for any version).
+The group name can be used with the methods described below.
+EOD
+ );
+
+ for my $group (sort keys %$reqs) {
+ my $p = $reqs->{$group}{pod}
+ or next;
+
+ my $modlist = $reqs->{$group}{req}
+ or next;
+
+ next unless keys %$modlist;
+
+ push @chunks, (
+ "=head2 $p->{title}",
+ "$p->{desc}",
+ '=over',
+ ( map { "=item * $_" . ($modlist->{$_} ? " >= $modlist->{$_}" : '') } (sort keys %$modlist) ),
+ '=back',
+ "Requirement group: B<$group>",
+ );
+ }
+
+ push @chunks, (
+ '=head1 METHODS',
+ '=head2 req_list_for',
+ '=over',
+ '=item Arguments: $group_name',
+ '=item Returns: \%list_of_module_version_pairs',
+ '=back',
+ <<EOD,
+This method should be used by DBIx::Class extension authors, to determine the
+version of modules which a specific feature requires in the current version of
+DBIx::Class. For example if you write a module/extension that requires
+DBIx::Class and also requires the availability of
+L<DBIx::Class::Storage::DBI/deploy>, you can do the following in your
+C<Makefile.PL> or C<Build.PL>
+
+ require $class;
+ my \$dep_list = $class->req_list_for ('deploy');
+
+This will give you a hashref of module/version pairs necessary for the
+particular feature to function with this version of DBIx::Class.
+EOD
+
+ '=head2 req_ok_for',
+ '=over',
+ '=item Arguments: $group_name',
+ '=item Returns: 1|0',
+ '=back',
+ 'Returns true or false depending on whether all modules required by C<$group_name> are present on the system and loadable',
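+ <<'EOD',
+For example, a sketch of the guard one might write (mirroring what the
+replicated storage does internally):
+
+  die 'replication support unavailable'
+    unless DBIx::Class::Optional::Dependencies->req_ok_for ('replicated');
+EOD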
+
+ '=head2 req_missing_for',
+ '=over',
+ '=item Arguments: $group_name',
+ '=item Returns: $error_message_string',
+ '=back',
+ <<EOD,
+Returns a single line string suitable for inclusion in larger error messages.
+This method would normally be used by a DBIx::Class core module author, to
+indicate to the user which modules must be installed before a specific
+feature can be used.
+
+For example if the requirements for C<replicated> are not available, the
+returned string would look like:
+
+ Moose >= 0.98, MooseX::Types >= 0.21, namespace::clean (see $class for details)
+
+The author is expected to prepend the necessary text to this message before
+returning the actual error seen by the user.
+EOD
+
+ '=head2 req_errorlist_for',
+ '=over',
+ '=item Arguments: $group_name',
+ '=item Returns: \%list_of_loaderrors_per_module',
+ '=back',
+ <<'EOD',
+Returns a hashref containing the actual errors that occurred while attempting
+to load each module in the requirement group.
+EOD
+ '=head1 AUTHOR',
+ 'See L<DBIx::Class/CONTRIBUTORS>.',
+ '=head1 LICENSE',
+ 'You may distribute this code under the same terms as Perl itself',
+ );
+
+ my $fn = __FILE__;
+ $fn =~ s/\.pm$/\.pod/;
+
+ open (my $fh, '>', $fn) or croak "Unable to write to $fn: $!";
+ print $fh join ("\n\n", @chunks);
+ close ($fh);
+}
+
+1;
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSet.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSet.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSet.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -291,10 +291,15 @@
$rows = $self->get_cache;
}
+ # reset the selector list
+ if (List::Util::first { exists $attrs->{$_} } qw{columns select as}) {
+ delete @{$our_attrs}{qw{select as columns +select +as +columns include_columns}};
+ }
+
my $new_attrs = { %{$our_attrs}, %{$attrs} };
# merge new attrs into inherited
- foreach my $key (qw/join prefetch +select +as bind/) {
+ foreach my $key (qw/join prefetch +select +as +columns include_columns bind/) {
next unless exists $attrs->{$key};
$new_attrs->{$key} = $self->_merge_attr($our_attrs->{$key}, $attrs->{$key});
}
@@ -2431,6 +2436,23 @@
return !!$self->{attrs}{page};
}
+=head2 is_ordered
+
+=over 4
+
+=item Arguments: none
+
+=item Return Value: true if the resultset has been ordered with C<order_by>.
+
+=back
+
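+For example (a sketch against a hypothetical C<CD> resultset):
+
+  my $rs = $schema->resultset('CD')->search(undef, { order_by => 'year' });
+  print "ordered\n" if $rs->is_ordered;
+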
+=cut
+
+sub is_ordered {
+ my ($self) = @_;
+ return scalar $self->result_source->storage->_parse_order_by($self->{attrs}{order_by});
+}
+
=head2 related_resultset
=over 4
@@ -2743,41 +2765,46 @@
# build columns (as long as select isn't set) into a set of as/select hashes
unless ( $attrs->{select} ) {
- my @cols = ( ref($attrs->{columns}) eq 'ARRAY' )
- ? @{ delete $attrs->{columns}}
- : (
- ( delete $attrs->{columns} )
- ||
- $source->columns
- )
- ;
+ my @cols;
+ if ( ref $attrs->{columns} eq 'ARRAY' ) {
+ @cols = @{ delete $attrs->{columns}}
+ } elsif ( defined $attrs->{columns} ) {
+ @cols = delete $attrs->{columns}
+ } else {
+ @cols = $source->columns
+ }
- @colbits = map {
- ( ref($_) eq 'HASH' )
- ? $_
- : {
- (
- /^\Q${alias}.\E(.+)$/
- ? "$1"
- : "$_"
- )
- =>
- (
- /\./
- ? "$_"
- : "${alias}.$_"
- )
- }
- } @cols;
+ for (@cols) {
+ if ( ref $_ eq 'HASH' ) {
+ push @colbits, $_
+ } else {
+ my $key = /^\Q${alias}.\E(.+)$/
+ ? "$1"
+ : "$_";
+ my $value = /\./
+ ? "$_"
+ : "${alias}.$_";
+ push @colbits, { $key => $value };
+ }
+ }
}
# add the additional columns on
- foreach ( 'include_columns', '+columns' ) {
- push @colbits, map {
- ( ref($_) eq 'HASH' )
- ? $_
- : { ( split( /\./, $_ ) )[-1] => ( /\./ ? $_ : "${alias}.$_" ) }
- } ( ref($attrs->{$_}) eq 'ARRAY' ) ? @{ delete $attrs->{$_} } : delete $attrs->{$_} if ( $attrs->{$_} );
+ foreach (qw{include_columns +columns}) {
+ if ( $attrs->{$_} ) {
+ my @list = ( ref($attrs->{$_}) eq 'ARRAY' )
+ ? @{ delete $attrs->{$_} }
+ : delete $attrs->{$_};
+ for (@list) {
+ if ( ref($_) eq 'HASH' ) {
+ push @colbits, $_
+ } else {
+ my $key = ( split /\./, $_ )[-1];
+ my $value = ( /\./ ? $_ : "$alias.$_" );
+ push @colbits, { $key => $value };
+ }
+ }
+ }
}
# start with initial select items
@@ -2786,15 +2813,22 @@
( ref $attrs->{select} eq 'ARRAY' )
? [ @{ $attrs->{select} } ]
: [ $attrs->{select} ];
- $attrs->{as} = (
- $attrs->{as}
- ? (
- ref $attrs->{as} eq 'ARRAY'
- ? [ @{ $attrs->{as} } ]
- : [ $attrs->{as} ]
+
+ if ( $attrs->{as} ) {
+ $attrs->{as} =
+ (
+ ref $attrs->{as} eq 'ARRAY'
+ ? [ @{ $attrs->{as} } ]
+ : [ $attrs->{as} ]
)
- : [ map { m/^\Q${alias}.\E(.+)$/ ? $1 : $_ } @{ $attrs->{select} } ]
- );
+ } else {
+ $attrs->{as} = [ map {
+ m/^\Q${alias}.\E(.+)$/
+ ? $1
+ : $_
+ } @{ $attrs->{select} }
+ ]
+ }
}
else {
@@ -2804,27 +2838,24 @@
}
# now add colbits to select/as
- push( @{ $attrs->{select} }, map { values( %{$_} ) } @colbits );
- push( @{ $attrs->{as} }, map { keys( %{$_} ) } @colbits );
+ push @{ $attrs->{select} }, map values %{$_}, @colbits;
+ push @{ $attrs->{as} }, map keys %{$_}, @colbits;
- my $adds;
- if ( $adds = delete $attrs->{'+select'} ) {
+ if ( my $adds = delete $attrs->{'+select'} ) {
$adds = [$adds] unless ref $adds eq 'ARRAY';
- push(
- @{ $attrs->{select} },
- map { /\./ || ref $_ ? $_ : "${alias}.$_" } @$adds
- );
+ push @{ $attrs->{select} },
+ map { /\./ || ref $_ ? $_ : "$alias.$_" } @$adds;
}
- if ( $adds = delete $attrs->{'+as'} ) {
+ if ( my $adds = delete $attrs->{'+as'} ) {
$adds = [$adds] unless ref $adds eq 'ARRAY';
- push( @{ $attrs->{as} }, @$adds );
+ push @{ $attrs->{as} }, @$adds;
}
- $attrs->{from} ||= [ {
+ $attrs->{from} ||= [{
-source_handle => $source->handle,
-alias => $self->{attrs}{alias},
$self->{attrs}{alias} => $source->from,
- } ];
+ }];
if ( $attrs->{join} || $attrs->{prefetch} ) {
@@ -2844,7 +2875,7 @@
$join,
$alias,
{ %{ $attrs->{seen_join} || {} } },
- ($attrs->{seen_join} && keys %{$attrs->{seen_join}})
+ ( $attrs->{seen_join} && keys %{$attrs->{seen_join}})
? $attrs->{from}[-1][0]{-join_path}
: []
,
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSetColumn.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSetColumn.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/ResultSetColumn.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -83,11 +83,6 @@
$new_parent_rs ||= $rs->search_rs;
my $new_attrs = $new_parent_rs->{attrs} ||= {};
- # FIXME - this should go away when the chaining branch is merged
- # since what we do is actually chain to the original resultset, we need to throw
- # away all selectors (otherwise they'll chain)
- delete $new_attrs->{$_} for (qw/columns +columns select +select as +as cols include_columns/);
-
# prefetch causes additional columns to be fetched, but we can not just make a new
# rs via the _resolved_attrs trick - we need to retain the separation between
# +select/+as and select/as. At the same time we want to preserve any joins that the
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Schema/Versioned.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Schema/Versioned.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Schema/Versioned.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -617,8 +617,9 @@
return;
}
- $self->throw_exception($self->storage->_sqlt_version_error)
- if (not $self->storage->_sqlt_version_ok);
+ unless (DBIx::Class::Optional::Dependencies->req_ok_for ('deploy')) {
+ $self->throw_exception("Unable to proceed without " . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy') );
+ }
my $db_tr = SQL::Translator->new({
add_drop_table => 1,
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/ODBC.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/ODBC.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/ODBC.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -21,15 +21,6 @@
}
}
-sub _dbh_last_insert_id {
- my ($self, $dbh, $source, $col) = @_;
-
- # punt: if there is no derived class for the specific backend, attempt
- # to use the DBI->last_insert_id, which may not be sufficient (see the
- # discussion of last_insert_id in perldoc DBI)
- return $dbh->last_insert_id(undef, undef, $source->from, $col);
-}
-
1;
=head1 NAME
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Pool.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Pool.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Pool.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -7,6 +7,7 @@
use DBI ();
use Carp::Clan qw/^DBIx::Class/;
use MooseX::Types::Moose qw/Num Int ClassName HashRef/;
+use DBIx::Class::Storage::DBI::Replicated::Types 'DBICStorageDBI';
use namespace::clean -except => 'meta';
@@ -152,6 +153,14 @@
},
);
+=head2 master
+
+Reference to the master Storage.
+
+=cut
+
+has master => (is => 'rw', isa => DBICStorageDBI, weak_ref => 1);
+
=head1 METHODS
This class defines the following methods.
@@ -243,7 +252,13 @@
$replicant->_determine_driver
});
- DBIx::Class::Storage::DBI::Replicated::Replicant->meta->apply($replicant);
+ Moose::Meta::Class->initialize(ref $replicant);
+
+ DBIx::Class::Storage::DBI::Replicated::Replicant->meta->apply($replicant);
+
+ # link back to master
+ $replicant->master($self->master);
+
return $replicant;
}
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Replicant.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Replicant.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated/Replicant.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -4,6 +4,7 @@
requires qw/_query_start/;
with 'DBIx::Class::Storage::DBI::Replicated::WithDSN';
use MooseX::Types::Moose qw/Bool Str/;
+use DBIx::Class::Storage::DBI::Replicated::Types 'DBICStorageDBI';
use namespace::clean -except => 'meta';
@@ -55,6 +56,14 @@
has dsn => (is => 'rw', isa => Str);
has id => (is => 'rw', isa => Str);
+=head2 master
+
+Reference to the master Storage.
+
+=cut
+
+has master => (is => 'rw', isa => DBICStorageDBI, weak_ref => 1);
+
=head1 METHODS
This class defines the following methods.
@@ -66,7 +75,9 @@
=cut
sub debugobj {
- return shift->schema->storage->debugobj;
+ my $self = shift;
+
+ return $self->master->debugobj;
}
=head1 ALSO SEE
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/Replicated.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -2,27 +2,9 @@
BEGIN {
use Carp::Clan qw/^DBIx::Class/;
-
- ## Modules required for Replication support not required for general DBIC
- ## use, so we explicitly test for these.
-
- my %replication_required = (
- 'Moose' => '0.90',
- 'MooseX::Types' => '0.21',
- 'namespace::clean' => '0.11',
- 'Hash::Merge' => '0.11'
- );
-
- my @didnt_load;
-
- for my $module (keys %replication_required) {
- eval "use $module $replication_required{$module}";
- push @didnt_load, "$module $replication_required{$module}"
- if $@;
- }
-
- croak("@{[ join ', ', @didnt_load ]} are missing and are required for Replication")
- if @didnt_load;
+ use DBIx::Class;
+ croak('The following modules are required for Replication ' . DBIx::Class::Optional::Dependencies->req_missing_for ('replicated') )
+ unless DBIx::Class::Optional::Dependencies->req_ok_for ('replicated');
}
use Moose;
@@ -33,6 +15,7 @@
use MooseX::Types::Moose qw/ClassName HashRef Object/;
use Scalar::Util 'reftype';
use Hash::Merge 'merge';
+use List::Util qw/min max/;
use namespace::clean -except => 'meta';
@@ -118,16 +101,9 @@
=head1 REQUIREMENTS
-Replicated Storage has additional requirements not currently part of L<DBIx::Class>
+Replicated Storage has additional requirements not currently part of
+L<DBIx::Class>. See L<DBIx::Class::Optional::Dependencies> for more details.
- Moose => '0.90',
- MooseX::Types => '0.21',
- namespace::clean => '0.11',
- Hash::Merge => '0.11'
-
-You will need to install these modules manually via CPAN or make them part of the
-Makefile for your distribution.
-
=head1 ATTRIBUTES
This class defines the following attributes.
@@ -276,12 +252,17 @@
select
select_single
columns_info_for
+ _dbh_columns_info_for
+ _select
/],
);
=head2 write_handler
-Defines an object that implements the write side of L<BIx::Class::Storage::DBI>.
+Defines an object that implements the write side of L<BIx::Class::Storage::DBI>,
+as well as methods that don't write or read that can be called on only one
+storage, methods that return a C<$dbh>, and any methods that don't make sense to
+run on a replicant.
=cut
@@ -292,7 +273,10 @@
handles=>[qw/
on_connect_do
on_disconnect_do
+ on_connect_call
+ on_disconnect_call
connect_info
+ _connect_info
throw_exception
sql_maker
sqlt_type
@@ -328,6 +312,59 @@
svp_rollback
svp_begin
svp_release
+ relname_to_table_alias
+ _straight_join_to_node
+ _dbh_last_insert_id
+ _fix_bind_params
+ _default_dbi_connect_attributes
+ _dbi_connect_info
+ auto_savepoint
+ _sqlt_version_ok
+ _query_end
+ bind_attribute_by_data_type
+ transaction_depth
+ _dbh
+ _select_args
+ _dbh_execute_array
+ _sql_maker_args
+ _sql_maker
+ _query_start
+ _sqlt_version_error
+ _per_row_update_delete
+ _dbh_begin_work
+ _dbh_execute_inserts_with_no_binds
+ _select_args_to_query
+ _svp_generate_name
+ _multipk_update_delete
+ source_bind_attributes
+ _normalize_connect_info
+ _parse_connect_do
+ _dbh_commit
+ _execute_array
+ _placeholders_supported
+ _verify_pid
+ savepoints
+ _sqlt_minimum_version
+ _sql_maker_opts
+ _conn_pid
+ _typeless_placeholders_supported
+ _conn_tid
+ _dbh_autocommit
+ _native_data_type
+ _get_dbh
+ sql_maker_class
+ _dbh_rollback
+ _adjust_select_args_for_complex_prefetch
+ _resolve_ident_sources
+ _resolve_column_info
+ _prune_unused_joins
+ _strip_cond_qualifiers
+ _parse_order_by
+ _resolve_aliastypes_from_select_args
+ _execute
+ _do_query
+ _dbh_sth
+ _dbh_execute
/],
);
@@ -391,8 +428,12 @@
my $master = $self->master;
$master->_determine_driver;
Moose::Meta::Class->initialize(ref $master);
+
DBIx::Class::Storage::DBI::Replicated::WithDSN->meta->apply($master);
+ # link pool back to master
+ $self->pool->master($master);
+
$wantarray ? @res : $res;
};
@@ -744,50 +785,35 @@
=head2 debugobj
-set a debug object across all storages
+set a debug object
=cut
sub debugobj {
my $self = shift @_;
- if(@_) {
- foreach my $source ($self->all_storages) {
- $source->debugobj(@_);
- }
- }
- return $self->master->debugobj;
+ return $self->master->debugobj(@_);
}
=head2 debugfh
-set a debugfh object across all storages
+set a debugfh object
=cut
sub debugfh {
my $self = shift @_;
- if(@_) {
- foreach my $source ($self->all_storages) {
- $source->debugfh(@_);
- }
- }
- return $self->master->debugfh;
+ return $self->master->debugfh(@_);
}
=head2 debugcb
-set a debug callback across all storages
+set a debug callback
=cut
sub debugcb {
my $self = shift @_;
- if(@_) {
- foreach my $source ($self->all_storages) {
- $source->debugcb(@_);
- }
- }
- return $self->master->debugcb;
+ return $self->master->debugcb(@_);
}
=head2 disconnect
@@ -818,6 +844,165 @@
$self->master->cursor_class;
}
+=head2 cursor
+
+Set the cursor class on all storages, or return the master's; an alias for
+L</cursor_class> above.
+
+=cut
+
+sub cursor {
+ my ($self, $cursor_class) = @_;
+
+ if ($cursor_class) {
+ $_->cursor($cursor_class) for $self->all_storages;
+ }
+ $self->master->cursor;
+}
+
+=head2 unsafe
+
+sets the L<DBIx::Class::Storage::DBI/unsafe> option on all storages or returns
+master's current setting
+
+=cut
+
+sub unsafe {
+ my $self = shift;
+
+ if (@_) {
+ $_->unsafe(@_) for $self->all_storages;
+ }
+
+ return $self->master->unsafe;
+}
+
+=head2 disable_sth_caching
+
+sets the L<DBIx::Class::Storage::DBI/disable_sth_caching> option on all storages
+or returns master's current setting
+
+=cut
+
+sub disable_sth_caching {
+ my $self = shift;
+
+ if (@_) {
+ $_->disable_sth_caching(@_) for $self->all_storages;
+ }
+
+ return $self->master->disable_sth_caching;
+}
+
+=head2 lag_behind_master
+
+returns the highest Replicant L<DBIx::Class::Storage::DBI/lag_behind_master>
+setting
+
+=cut
+
+sub lag_behind_master {
+ my $self = shift;
+
+ return max map $_->lag_behind_master, $self->replicants;
+}
+
+=head2 is_replicating
+
+returns true if all replicants return true for
+L<DBIx::Class::Storage::DBI/is_replicating>
+
+=cut
+
+sub is_replicating {
+ my $self = shift;
+
+ return (grep $_->is_replicating, $self->replicants) == ($self->replicants);
+}
+
+=head2 connect_call_datetime_setup
+
+calls L<DBIx::Class::Storage::DBI/connect_call_datetime_setup> for all storages
+
+=cut
+
+sub connect_call_datetime_setup {
+ my $self = shift;
+ $_->connect_call_datetime_setup for $self->all_storages;
+}
+
+sub _populate_dbh {
+ my $self = shift;
+ $_->_populate_dbh for $self->all_storages;
+}
+
+sub _connect {
+ my $self = shift;
+ $_->_connect for $self->all_storages;
+}
+
+sub _rebless {
+ my $self = shift;
+ $_->_rebless for $self->all_storages;
+}
+
+sub _determine_driver {
+ my $self = shift;
+ $_->_determine_driver for $self->all_storages;
+}
+
+sub _driver_determined {
+ my $self = shift;
+
+ if (@_) {
+ $_->_driver_determined(@_) for $self->all_storages;
+ }
+
+ return $self->master->_driver_determined;
+}
+
+sub _init {
+ my $self = shift;
+
+ $_->_init for $self->all_storages;
+}
+
+sub _run_connection_actions {
+ my $self = shift;
+
+ $_->_run_connection_actions for $self->all_storages;
+}
+
+sub _do_connection_actions {
+ my $self = shift;
+
+ if (@_) {
+ $_->_do_connection_actions(@_) for $self->all_storages;
+ }
+}
+
+sub connect_call_do_sql {
+ my $self = shift;
+ $_->connect_call_do_sql(@_) for $self->all_storages;
+}
+
+sub disconnect_call_do_sql {
+ my $self = shift;
+ $_->disconnect_call_do_sql(@_) for $self->all_storages;
+}
+
+sub _seems_connected {
+ my $self = shift;
+
+ return min map $_->_seems_connected, $self->all_storages;
+}
+
+sub _ping {
+ my $self = shift;
+
+ return min map $_->_ping, $self->all_storages;
+}
+
=head1 GOTCHAS
Due to the fact that replicants can lag behind a master, you must take care to
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLAnywhere.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLAnywhere.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLAnywhere.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -43,6 +43,16 @@
$source->column_info($_)->{is_auto_increment}
} $source->columns;
+# user might have an identity PK without is_auto_increment
+ if (not $identity_col) {
+ foreach my $pk_col ($source->primary_columns) {
+ if (not exists $to_insert->{$pk_col}) {
+ $identity_col = $pk_col;
+ last;
+ }
+ }
+ }
+
if ($identity_col && (not exists $to_insert->{$identity_col})) {
my $dbh = $self->_get_dbh;
my $table_name = $source->from;
@@ -112,8 +122,36 @@
);
}
+sub _svp_begin {
+ my ($self, $name) = @_;
+
+ $self->_get_dbh->do("SAVEPOINT $name");
+}
+
+# can't release savepoints that have been rolled back
+sub _svp_release { 1 }
+
+sub _svp_rollback {
+ my ($self, $name) = @_;
+
+ $self->_get_dbh->do("ROLLBACK TO SAVEPOINT $name")
+}
+
1;
+=head1 MAXIMUM CURSORS
+
+A L<DBIx::Class>> application can use a lot of cursors, due to the usage of
+L<DBI/prepare_cached>.
+
+The default cursor maximum is C<50>, which can be a bit too low. This limit can
+be turned off (or increased) by the DBA by executing:
+
+ set option max_statement_count = 0
+ set option max_cursor_count = 0
+
+Raising or disabling these limits is highly recommended.
+
=head1 AUTHOR
See L<DBIx::Class/AUTHOR> and L<DBIx::Class/CONTRIBUTORS>.
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLite.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLite.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI/SQLite.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -10,11 +10,6 @@
use File::Copy;
use File::Spec;
-sub _dbh_last_insert_id {
- my ($self, $dbh, $source, $col) = @_;
- $dbh->func('last_insert_rowid');
-}
-
sub backup
{
my ($self, $dir) = @_;
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/Storage/DBI.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -16,11 +16,6 @@
use Data::Dumper::Concise();
use Sub::Name ();
-# what version of sqlt do we require if deploy() without a ddl_dir is invoked
-# when changing also adjust the corresponding author_require in Makefile.PL
-my $minimum_sqlt_version = '0.11002';
-
-
__PACKAGE__->mk_group_accessors('simple' =>
qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts _conn_pid
_conn_tid transaction_depth _dbh_autocommit _driver_determined savepoints/
@@ -2053,18 +2048,14 @@
=cut
sub _dbh_last_insert_id {
- # All Storage's need to register their own _dbh_last_insert_id
- # the old SQLite-based method was highly inappropriate
+ my ($self, $dbh, $source, $col) = @_;
- my $self = shift;
- my $class = ref $self;
- $self->throw_exception (<<EOE);
+ my $id = eval { $dbh->last_insert_id (undef, undef, $source->name, $col) };
-No _dbh_last_insert_id() method found in $class.
-Since the method of obtaining the autoincrement id of the last insert
-operation varies greatly between different databases, this method must be
-individually implemented for every storage class.
-EOE
+ return $id if defined $id;
+
+ my $class = ref $self;
+ $self->throw_exception ("No storage specific _dbh_last_insert_id() method implemented in $class, and the generic DBI::last_insert_id() failed");
}
sub last_insert_id {
@@ -2255,8 +2246,9 @@
%{$sqltargs || {}}
};
- $self->throw_exception("Can't create a ddl file without SQL::Translator: " . $self->_sqlt_version_error)
- if !$self->_sqlt_version_ok;
+ unless (DBIx::Class::Optional::Dependencies->req_ok_for ('deploy')) {
+ $self->throw_exception("Can't create a ddl file without " . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy') );
+ }
my $sqlt = SQL::Translator->new( $sqltargs );
@@ -2398,8 +2390,9 @@
return join('', @rows);
}
- $self->throw_exception("Can't deploy without either SQL::Translator or a ddl_dir: " . $self->_sqlt_version_error )
- if !$self->_sqlt_version_ok;
+ unless (DBIx::Class::Optional::Dependencies->req_ok_for ('deploy') ) {
+ $self->throw_exception("Can't deploy without a ddl_dir or " . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy') );
+ }
# sources needs to be a parser arg, but for simplicity allow at top level
# coming in
@@ -2523,33 +2516,6 @@
return;
}
-# SQLT version handling
-{
- my $_sqlt_version_ok; # private
- my $_sqlt_version_error; # private
-
- sub _sqlt_version_ok {
- if (!defined $_sqlt_version_ok) {
- eval "use SQL::Translator $minimum_sqlt_version";
- if ($@) {
- $_sqlt_version_ok = 0;
- $_sqlt_version_error = $@;
- }
- else {
- $_sqlt_version_ok = 1;
- }
- }
- return $_sqlt_version_ok;
- }
-
- sub _sqlt_version_error {
- shift->_sqlt_version_ok unless defined $_sqlt_version_ok;
- return $_sqlt_version_error;
- }
-
- sub _sqlt_minimum_version { $minimum_sqlt_version };
-}
-
=head2 relname_to_table_alias
=over 4
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/UTF8Columns.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/UTF8Columns.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class/UTF8Columns.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -113,7 +113,8 @@
# override this if you want to force everything to be encoded/decoded
sub _is_utf8_column {
- return (shift->utf8_columns || {})->{shift @_};
+ # my ($self, $col) = @_;
+ return ($_[0]->utf8_columns || {})->{$_[1]};
}
=head1 AUTHORS
Modified: DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/lib/DBIx/Class.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -6,6 +6,8 @@
use MRO::Compat;
use mro 'c3';
+use DBIx::Class::Optional::Dependencies;
+
use vars qw($VERSION);
use base qw/DBIx::Class::Componentised Class::Accessor::Grouped/;
use DBIx::Class::StartupCheck;
@@ -25,7 +27,7 @@
# Always remember to do all digits for the version even if they're 0
# i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports
# brain damage and presumably various other packaging systems too
-$VERSION = '0.08117_01';
+$VERSION = '0.08118_01';
$VERSION = eval $VERSION; # numify for warning-free dev releases
@@ -245,6 +247,8 @@
debolaz: Anders Nor Berle <berle at cpan.org>
+dew: Dan Thomas <dan at godders.org>
+
dkubb: Dan Kubb <dan.kubb-cpan at onautopilot.com>
dnm: Justin Wheeler <jwheeler at datademons.com>
@@ -255,6 +259,8 @@
frew: Arthur Axel "fREW" Schmidt <frioux at gmail.com>
+goraxe: Gordon Irving <goraxe at cpan.org>
+
gphat: Cory G Watson <gphat at cpan.org>
groditi: Guillermo Roditi <groditi at cpan.org>
Modified: DBIx-Class/0.08/branches/prefetch/script/dbicadmin
===================================================================
--- DBIx-Class/0.08/branches/prefetch/script/dbicadmin 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/script/dbicadmin 2010-02-14 10:46:07 UTC (rev 8704)
@@ -1,221 +1,95 @@
#!/usr/bin/perl
+
use strict;
use warnings;
-use Getopt::Long;
-use Pod::Usage;
-use JSON::Any;
-
-
-my $json = JSON::Any->new(allow_barekey => 1, allow_singlequote => 1);
-
-GetOptions(
- 'schema=s' => \my $schema_class,
- 'class=s' => \my $resultset_class,
- 'connect=s' => \my $connect,
- 'op=s' => \my $op,
- 'set=s' => \my $set,
- 'where=s' => \my $where,
- 'attrs=s' => \my $attrs,
- 'format=s' => \my $format,
- 'force' => \my $force,
- 'trace' => \my $trace,
- 'quiet' => \my $quiet,
- 'help' => \my $help,
- 'tlibs' => \my $t_libs,
-);
-
-if ($t_libs) {
- unshift( @INC, 't/lib', 'lib' );
+BEGIN {
+ use DBIx::Class;
+ die ( "The following modules are required for the dbicadmin utility\n"
+ . DBIx::Class::Optional::Dependencies->req_missing_for ('admin_script')
+ ) unless DBIx::Class::Optional::Dependencies->req_ok_for ('admin_script');
}
-pod2usage(1) if ($help);
-$ENV{DBIC_TRACE} = 1 if ($trace);
+use Getopt::Long::Descriptive;
+use DBIx::Class::Admin;
-die('No op specified') if(!$op);
-die('Invalid op') if ($op!~/^insert|update|delete|select$/s);
-my $csv_class;
-if ($op eq 'select') {
- $format ||= 'tsv';
- die('Invalid format') if ($format!~/^tsv|csv$/s);
- $csv_class = 'Text::CSV_XS';
- eval{ require Text::CSV_XS };
- if ($@) {
- $csv_class = 'Text::CSV_PP';
- eval{ require Text::CSV_PP };
- die('The select op requires either the Text::CSV_XS or the Text::CSV_PP module') if ($@);
- }
-}
-
-die('No schema specified') if(!$schema_class);
-eval("require $schema_class");
-die('Unable to load schema') if ($@);
-$connect = $json->jsonToObj( $connect ) if ($connect);
-my $schema = $schema_class->connect(
- ( $connect ? @$connect : () )
+my ($opts, $usage) = describe_options(
+ "%c: %o",
+ (
+ ['Actions'],
+ ["action" => hidden => { one_of => [
+ ['create|c' => 'Create version diffs needs preversion',],
+ ['upgrade|u' => 'Upgrade the database to the current schema '],
+ ['install|i' => 'Install the schema to the database',],
+ ['deploy|d' => 'Deploy the schema to the database',],
+ ['select|s' => 'Select data from the schema', ],
+ ['insert|i' => 'Insert data into the schema', ],
+ ['update|u' => 'Update data in the schema', ],
+ ['delete|D' => 'Delete data from the schema',],
+ ['op:s' => 'compatiblity option all of the above can be suppied as --op=<action>'],
+ ['help|h' => 'display this help'],
+ ], required=> 1 }],
+ ['Options'],
+ ['schema-class|schema|C:s' => 'The class of the schema to load', { required => 1 } ],
+ ['resultset|resultset_class|class|r:s' => 'The resultset to operate on for data manipulation' ],
+ ['config-stanza|S:s' => 'Where in the config to find the connection_info, supply in form MyApp::Model::DB',],
+ ['config|f:s' => 'Supply the config file for parsing by Config::Any', { depends => 'config_stanza'} ],
+ ['connect-info|n:s%' => 'Supply the connect info as additonal options ie -I dsn=<dsn> user=<user> password=<pass> '],
+ ['connect:s' => 'Supply the connect info as a json string' ],
+ ['sql-dir|q:s' => 'The directory where sql diffs will be created'],
+ ['sql-type|t:s' => 'The RDBMs flavour you wish to use'],
+ ['version|v:i' => 'Supply a version install'],
+ ['preversion|p:s' => 'The previous version to diff against',],
+ ['set:s' => 'JSON data used to perform data operations' ],
+ ['lib|I:s' => 'Additonal library path to search in'],
+ ['attrs:s' => 'JSON string to be used for the second argument for search'],
+ ['where:s' => 'JSON string to be used for the where clause of search'],
+ ['force' => 'Be forceful with some operations'],
+ ['trace' => 'Turn on DBIx::Class trace output'],
+ ['quiet' => 'Be less verbose'],
+ )
);
-die('No class specified') if(!$resultset_class);
-my $resultset = eval{ $schema->resultset($resultset_class) };
-die('Unable to load the class with the schema') if ($@);
+die "please only use one of --config or --connect-info\n" if ($opts->{config} and $opts->{connect_info});
-$set = $json->jsonToObj( $set ) if ($set);
-$where = $json->jsonToObj( $where ) if ($where);
-$attrs = $json->jsonToObj( $attrs ) if ($attrs);
-
-if ($op eq 'insert') {
- die('Do not use the where option with the insert op') if ($where);
- die('Do not use the attrs option with the insert op') if ($attrs);
- my $obj = $resultset->create( $set );
- print ''.ref($resultset).' ID: '.join(',',$obj->id())."\n" if (!$quiet);
+# option compatibility mangle
+if($opts->{connect}) {
+ $opts->{connect_info} = delete $opts->{connect};
}
-elsif ($op eq 'update') {
- $resultset = $resultset->search( ($where||{}) );
- my $count = $resultset->count();
- print "This action will modify $count ".ref($resultset)." records.\n" if (!$quiet);
- if ( $force || confirm() ) {
- $resultset->update_all( $set );
- }
-}
-elsif ($op eq 'delete') {
- die('Do not use the set option with the delete op') if ($set);
- $resultset = $resultset->search( ($where||{}), ($attrs||()) );
- my $count = $resultset->count();
- print "This action will delete $count ".ref($resultset)." records.\n" if (!$quiet);
- if ( $force || confirm() ) {
- $resultset->delete_all();
- }
-}
-elsif ($op eq 'select') {
- die('Do not use the set option with the select op') if ($set);
- my $csv = $csv_class->new({
- sep_char => ( $format eq 'tsv' ? "\t" : ',' ),
- });
- $resultset = $resultset->search( ($where||{}), ($attrs||()) );
- my @columns = $resultset->result_source->columns();
- $csv->combine( @columns );
- print $csv->string()."\n";
- while (my $row = $resultset->next()) {
- my @fields;
- foreach my $column (@columns) {
- push( @fields, $row->get_column($column) );
- }
- $csv->combine( @fields );
- print $csv->string()."\n";
- }
-}
-sub confirm {
- print "Are you sure you want to do this? (type YES to confirm) ";
- my $response = <STDIN>;
- return 1 if ($response=~/^YES/);
- return;
-}
+my $admin = DBIx::Class::Admin->new( %$opts );
-__END__
-=head1 NAME
+my $action = $opts->{action};
-dbicadmin - Execute operations upon DBIx::Class objects.
+$action = $opts->{op} if ($action eq 'op');
-=head1 SYNOPSIS
+print "Performig action $action...\n";
- dbicadmin --op=insert --schema=My::Schema --class=Class --set=JSON
- dbicadmin --op=update --schema=My::Schema --class=Class --set=JSON --where=JSON
- dbicadmin --op=delete --schema=My::Schema --class=Class --where=JSON
- dbicadmin --op=select --schema=My::Schema --class=Class --where=JSON --format=tsv
+my $res = $admin->$action();
+if ($action eq 'select') {
-=head1 DESCRIPTION
+ my $format = $opts->{format} || 'tsv';
+ die('Invalid format') if ($format!~/^tsv|csv$/s);
-This utility provides the ability to run INSERTs, UPDATEs,
-DELETEs, and SELECTs on any DBIx::Class object.
+ require Text::CSV;
-=head1 OPTIONS
+ my $csv = Text::CSV->new({
+ sep_char => ( $format eq 'tsv' ? "\t" : ',' ),
+ });
-=head2 op
+ foreach my $row (@$res) {
+ $csv->combine( @$row );
+ print $csv->string()."\n";
+ }
+}
-The type of operation. Valid values are insert, update, delete,
-and select.
-
-=head2 schema
-
-The name of your schema class.
-
-=head2 class
-
-The name of the class, within your schema, that you want to run
-the operation on.
-
-=head2 connect
-
-A JSON array to be passed to your schema class upon connecting.
-The array will need to be compatible with whatever the DBIC
-->connect() method requires.
-
-=head2 set
-
-This option must be valid JSON data string and is passed in to
-the DBIC update() method. Use this option with the update
-and insert ops.
-
-=head2 where
-
-This option must be valid JSON data string and is passed in as
-the first argument to the DBIC search() method. Use this
-option with the update, delete, and select ops.
-
-=head2 attrs
-
-This option must be valid JSON data string and is passed in as
-the second argument to the DBIC search() method. Use this
-option with the update, delete, and select ops.
-
-=head2 help
-
-Display this help page.
-
-=head2 force
-
-Suppresses the confirmation dialogues that are usually displayed
-when someone runs a DELETE or UPDATE action.
-
-=head2 quiet
-
-Do not display status messages.
-
-=head2 trace
-
-Turns on tracing on the DBI storage, thus printing SQL as it is
-executed.
-
-=head2 tlibs
-
-This option is purely for testing during the DBIC installation. Do
-not use it.
-
-=head1 JSON
-
-JSON is a lightweight data-interchange format. It allows you
-to express complex data structures for use in the where and
-set options.
-
-This module turns on L<JSON>'s BareKey and QuotApos options so
-that your data can look a bit more readable.
-
- --where={"this":"that"} # generic JSON
- --where={this:'that'} # with BareKey and QuoteApos
-
-Consider wrapping your JSON in outer quotes so that you don't
-have to escape your inner quotes.
-
- --where={this:\"that\"} # no outer quote
- --where='{this:"that"}' # outer quoted
-
=head1 AUTHOR
-Aran Deltac <bluefeet at cpan.org>
+See L<DBIx::Class/CONTRIBUTORS>.
=head1 LICENSE
-You may distribute this code under the same terms as Perl itself.
+You may distribute this code under the same terms as Perl itself
+=cut
Modified: DBIx-Class/0.08/branches/prefetch/t/03podcoverage.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/03podcoverage.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/03podcoverage.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -86,6 +86,14 @@
/]
},
+ 'DBIx::Class::Storage::DBI::Replicated*' => {
+ ignore => [ qw/
+ connect_call_do_sql
+ disconnect_call_do_sql
+ /]
+ },
+
+ 'DBIx::Class::Admin::Types' => { skip => 1 },
'DBIx::Class::ClassResolver::PassThrough' => { skip => 1 },
'DBIx::Class::Componentised' => { skip => 1 },
'DBIx::Class::Relationship::*' => { skip => 1 },
@@ -95,7 +103,6 @@
'DBIx::Class::Storage::DBI::Replicated::Types' => { skip => 1 },
# test some specific components whose parents are exempt below
- 'DBIx::Class::Storage::DBI::Replicated*' => {},
'DBIx::Class::Relationship::Base' => {},
# internals
Added: DBIx-Class/0.08/branches/prefetch/t/10optional_deps.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/10optional_deps.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/10optional_deps.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,70 @@
+use strict;
+use warnings;
+no warnings qw/once/;
+
+use Test::More;
+use lib qw(t/lib);
+use Scalar::Util; # load before we break require()
+
+use_ok 'DBIx::Class::Optional::Dependencies';
+
+my $sqlt_dep = DBIx::Class::Optional::Dependencies->req_list_for ('deploy');
+is_deeply (
+ [ keys %$sqlt_dep ],
+ [ 'SQL::Translator' ],
+ 'Correct deploy() dependency list',
+);
+
+# make module loading impossible, regardless of actual libpath contents
+ at INC = (sub { die('Optional Dep Test') } );
+
+ok (
+ ! DBIx::Class::Optional::Dependencies->req_ok_for ('deploy'),
+ 'deploy() deps missing',
+);
+
+like (
+ DBIx::Class::Optional::Dependencies->req_missing_for ('deploy'),
+ qr/^SQL::Translator \>\= \d/,
+ 'expected missing string contents',
+);
+
+like (
+ DBIx::Class::Optional::Dependencies->req_errorlist_for ('deploy')->{'SQL::Translator'},
+ qr/Optional Dep Test/,
+ 'custom exception found in errorlist',
+);
+
+
+#make it so module appears loaded
+$INC{'SQL/Translator.pm'} = 1;
+$SQL::Translator::VERSION = 999;
+
+ok (
+ ! DBIx::Class::Optional::Dependencies->req_ok_for ('deploy'),
+ 'deploy() deps missing cached properly',
+);
+
+#reset cache
+%DBIx::Class::Optional::Dependencies::req_availability_cache = ();
+
+
+ok (
+ DBIx::Class::Optional::Dependencies->req_ok_for ('deploy'),
+ 'deploy() deps present',
+);
+
+is (
+ DBIx::Class::Optional::Dependencies->req_missing_for ('deploy'),
+ '',
+ 'expected null missing string',
+);
+
+is_deeply (
+ DBIx::Class::Optional::Dependencies->req_errorlist_for ('deploy'),
+ {},
+ 'expected empty errorlist',
+);
+
+
+done_testing;
Modified: DBIx-Class/0.08/branches/prefetch/t/749sybase_asa.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/749sybase_asa.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/749sybase_asa.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -28,7 +28,9 @@
next unless $dsn;
- my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+ my $schema = DBICTest::Schema->connect($dsn, $user, $pass, {
+ auto_savepoint => 1
+ });
my $dbh = $schema->storage->dbh;
@@ -58,6 +60,28 @@
$new->discard_changes;
is($new->artistid, 66, 'Explicit PK assigned');
+# test savepoints
+ eval {
+ $schema->txn_do(sub {
+ eval {
+ $schema->txn_do(sub {
+ $ars->create({ name => 'in_savepoint' });
+ die "rolling back savepoint";
+ });
+ };
+ ok ((not $ars->search({ name => 'in_savepoint' })->first),
+ 'savepoint rolled back');
+ $ars->create({ name => 'in_outer_txn' });
+ die "rolling back outer txn";
+ });
+ };
+
+ like $@, qr/rolling back outer txn/,
+ 'correct exception for rollback';
+
+ ok ((not $ars->search({ name => 'in_outer_txn' })->first),
+ 'outer txn rolled back');
+
# test populate
lives_ok (sub {
my @pop;
Modified: DBIx-Class/0.08/branches/prefetch/t/76select.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/76select.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/76select.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -157,10 +157,9 @@
$sub_rs->single,
{
artist => 1,
- track_position => 2,
- tracks => {
+ tracks => {
+ title => 'Apiary',
trackid => 17,
- title => 'Apiary',
},
},
'columns/select/as fold properly on sub-searches',
Modified: DBIx-Class/0.08/branches/prefetch/t/86sqlt.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/86sqlt.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/86sqlt.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -6,10 +6,10 @@
use DBICTest;
BEGIN {
- require DBIx::Class::Storage::DBI;
+ require DBIx::Class;
plan skip_all =>
- 'Test needs SQL::Translator ' . DBIx::Class::Storage::DBI->_sqlt_minimum_version
- if not DBIx::Class::Storage::DBI->_sqlt_version_ok;
+ 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for ('deploy')
}
my $schema = DBICTest->init_schema (no_deploy => 1);
Deleted: DBIx-Class/0.08/branches/prefetch/t/89dbicadmin.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/89dbicadmin.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/89dbicadmin.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -1,89 +0,0 @@
-# vim: filetype=perl
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-
-eval 'require JSON::Any';
-plan skip_all => 'Install JSON::Any to run this test' if ($@);
-
-eval 'require Text::CSV_XS';
-if ($@) {
- eval 'require Text::CSV_PP';
- plan skip_all => 'Install Text::CSV_XS or Text::CSV_PP to run this test' if ($@);
-}
-
-my @json_backends = qw/XS JSON DWIW/;
-my $tests_per_run = 5;
-
-plan tests => $tests_per_run * @json_backends;
-
-for my $js (@json_backends) {
-
- eval {JSON::Any->import ($js) };
- SKIP: {
- skip ("Json backend $js is not available, skip testing", $tests_per_run) if $@;
-
- $ENV{JSON_ANY_ORDER} = $js;
- eval { test_dbicadmin () };
- diag $@ if $@;
- }
-}
-
-sub test_dbicadmin {
- my $schema = DBICTest->init_schema( sqlite_use_file => 1 ); # reinit a fresh db for every run
-
- my $employees = $schema->resultset('Employee');
-
- system( _prepare_system_args( qw|--op=insert --set={"name":"Matt"}| ) );
- ok( ($employees->count()==1), "$ENV{JSON_ANY_ORDER}: insert count" );
-
- my $employee = $employees->find(1);
- ok( ($employee->name() eq 'Matt'), "$ENV{JSON_ANY_ORDER}: insert valid" );
-
- system( _prepare_system_args( qw|--op=update --set={"name":"Trout"}| ) );
- $employee = $employees->find(1);
- ok( ($employee->name() eq 'Trout'), "$ENV{JSON_ANY_ORDER}: update" );
-
- system( _prepare_system_args( qw|--op=insert --set={"name":"Aran"}| ) );
-
- SKIP: {
- skip ("MSWin32 doesn't support -| either", 1) if $^O eq 'MSWin32';
-
- open(my $fh, "-|", _prepare_system_args( qw|--op=select --attrs={"order_by":"name"}| ) ) or die $!;
- my $data = do { local $/; <$fh> };
- close($fh);
- ok( ($data=~/Aran.*Trout/s), "$ENV{JSON_ANY_ORDER}: select with attrs" );
- }
-
- system( _prepare_system_args( qw|--op=delete --where={"name":"Trout"}| ) );
- ok( ($employees->count()==1), "$ENV{JSON_ANY_ORDER}: delete" );
-}
-
-# Why do we need this crap? Apparently MSWin32 can not pass through quotes properly
-# (sometimes it will and sometimes not, depending on what compiler was used to build
-# perl). So we go the extra mile to escape all the quotes. We can't also use ' instead
-# of ", because JSON::XS (proudly) does not support "malformed JSON" as the author
-# calls it. Bleh.
-#
-sub _prepare_system_args {
- my $perl = $^X;
- my @args = (
- qw|script/dbicadmin --quiet --schema=DBICTest::Schema --class=Employee --tlibs|,
- q|--connect=["dbi:SQLite:dbname=t/var/DBIxClass.db","","",{"AutoCommit":1}]|,
- qw|--force --tlibs|,
- @_,
- );
-
- if ( $^O eq 'MSWin32' ) {
- $perl = qq|"$perl"|; # execution will fail if $^X contains paths
- for (@args) {
- $_ =~ s/"/\\"/g;
- }
- }
-
- return ($perl, @args);
-}
Modified: DBIx-Class/0.08/branches/prefetch/t/94versioning.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/94versioning.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/94versioning.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -22,10 +22,10 @@
|| plan skip_all => 'Test needs Time::HiRes';
Time::HiRes->import(qw/time sleep/);
- require DBIx::Class::Storage::DBI;
+ require DBIx::Class;
plan skip_all =>
- 'Test needs SQL::Translator ' . DBIx::Class::Storage::DBI->_sqlt_minimum_version
- if not DBIx::Class::Storage::DBI->_sqlt_version_ok;
+ 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for ('deploy')
}
use lib qw(t/lib);
Modified: DBIx-Class/0.08/branches/prefetch/t/99dbic_sqlt_parser.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/99dbic_sqlt_parser.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/99dbic_sqlt_parser.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -9,10 +9,10 @@
use Scalar::Util ();
BEGIN {
- require DBIx::Class::Storage::DBI;
+ require DBIx::Class;
plan skip_all =>
- 'Test needs SQL::Translator ' . DBIx::Class::Storage::DBI->_sqlt_minimum_version
- if not DBIx::Class::Storage::DBI->_sqlt_version_ok;
+ 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for ('deploy')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for ('deploy')
}
# Test for SQLT-related leaks
Added: DBIx-Class/0.08/branches/prefetch/t/admin/01load.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/admin/01load.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/admin/01load.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,15 @@
+use strict;
+use warnings;
+
+use Test::More;
+
+BEGIN {
+ require DBIx::Class;
+ plan skip_all => 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for('admin')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for('admin');
+}
+
+use_ok 'DBIx::Class::Admin';
+
+
+done_testing;
Added: DBIx-Class/0.08/branches/prefetch/t/admin/02ddl.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/admin/02ddl.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/admin/02ddl.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,130 @@
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use Test::Warn;
+
+BEGIN {
+ require DBIx::Class;
+ plan skip_all => 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for('admin')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for('admin');
+
+ plan skip_all => 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for('deploy')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for('deploy');
+}
+
+use lib qw(t/lib);
+use DBICTest;
+
+use Path::Class;
+
+use ok 'DBIx::Class::Admin';
+
+
+my $sql_dir = dir(qw/t var/);
+my @connect_info = DBICTest->_database(
+ no_deploy=>1,
+ no_populate=>1,
+ sqlite_use_file => 1,
+);
+{ # create the schema
+
+# make sure we are clean
+clean_dir($sql_dir);
+
+
+my $admin = DBIx::Class::Admin->new(
+ schema_class=> "DBICTest::Schema",
+ sql_dir=> $sql_dir,
+ connect_info => \@connect_info,
+);
+isa_ok ($admin, 'DBIx::Class::Admin', 'create the admin object');
+lives_ok { $admin->create('MySQL'); } 'Can create MySQL sql';
+lives_ok { $admin->create('SQLite'); } 'Can create SQLite sql';
+}
+
+{ # upgrade schema
+
+#my $schema = DBICTest->init_schema(
+# no_deploy => 1,
+# no_populate => 1,
+# sqlite_use_file => 1,
+#);
+
+clean_dir($sql_dir);
+require DBICVersion_v1;
+
+my $admin = DBIx::Class::Admin->new(
+ schema_class => 'DBICVersion::Schema',
+ sql_dir => $sql_dir,
+ connect_info => \@connect_info,
+);
+
+my $schema = $admin->schema();
+
+lives_ok { $admin->create($schema->storage->sqlt_type(), {add_drop_table=>0}); } 'Can create DBICVersionOrig sql in ' . $schema->storage->sqlt_type;
+lives_ok { $admin->deploy() } 'Can deploy schema';
+
+# connect to now deployed schema
+lives_ok { $schema = DBICVersion::Schema->connect(@{$schema->storage->connect_info()}); } 'Connect to deployed Database';
+
+is($schema->get_db_version, $DBICVersion::Schema::VERSION, 'Schema deployed and versions match');
+
+
+require DBICVersion_v2;
+
+$admin = DBIx::Class::Admin->new(
+ schema_class => 'DBICVersion::Schema',
+ sql_dir => $sql_dir,
+ connect_info => \@connect_info
+);
+
+lives_ok { $admin->create($schema->storage->sqlt_type(), {}, "1.0" ); } 'Can create diff for ' . $schema->storage->sqlt_type;
+{
+ local $SIG{__WARN__} = sub { warn $_[0] unless $_[0] =~ /DB version .+? is lower than the schema version/ };
+ lives_ok {$admin->upgrade();} 'upgrade the schema';
+}
+
+is($schema->get_db_version, $DBICVersion::Schema::VERSION, 'Schema and db versions match');
+
+}
+
+{ # install
+
+clean_dir($sql_dir);
+
+my $admin = DBIx::Class::Admin->new(
+ schema_class => 'DBICVersion::Schema',
+ sql_dir => $sql_dir,
+ _confirm => 1,
+ connect_info => \@connect_info,
+);
+
+$admin->version("3.0");
+lives_ok { $admin->install(); } 'install schema version 3.0';
+is($admin->schema->get_db_version, "3.0", 'db thinks its version 3.0');
+dies_ok { $admin->install("4.0"); } 'cannot install to allready existing version';
+
+$admin->force(1);
+warnings_exist ( sub {
+ lives_ok { $admin->install("4.0") } 'can force install to allready existing version'
+}, qr/Forcing install may not be a good idea/, 'Force warning emitted' );
+is($admin->schema->get_db_version, "4.0", 'db thinks its version 4.0');
+#clean_dir($sql_dir);
+}
+
+sub clean_dir {
+ my ($dir) = @_;
+ $dir = $dir->resolve;
+ if ( ! -d $dir ) {
+ $dir->mkpath();
+ }
+ foreach my $file ($dir->children) {
+    # skip any hidden files (match on the basename - $file stringifies
+    # to a full path here after resolve(), so /^\./ would never match it)
+    next if ($file->basename =~ /^\./);
+ unlink $file;
+ }
+}
+
+done_testing;
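For orientation, a condensed sketch of the DDL lifecycle the test above walks through, using only the constructor arguments and call signatures visible in the test itself (producer name and preversion are illustrative):

  use lib 't/lib';
  use DBICTest;
  use DBIx::Class::Admin;
  use Path::Class 'dir';

  my @connect_info = DBICTest->_database(
      no_deploy       => 1,
      no_populate     => 1,
      sqlite_use_file => 1,
  );

  my $admin = DBIx::Class::Admin->new(
      schema_class => 'DBICVersion::Schema',
      sql_dir      => dir(qw/t var/),
      connect_info => \@connect_info,
  );

  $admin->create('SQLite');               # generate CREATE ddl for one producer
  $admin->deploy;                         # apply it to the connected database
  $admin->create('SQLite', {}, '1.0');    # generate a diff against version 1.0
  $admin->upgrade;                        # bring the database up to date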
Added: DBIx-Class/0.08/branches/prefetch/t/admin/03data.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/admin/03data.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/admin/03data.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,65 @@
+use strict;
+use warnings;
+
+use Test::More;
+
+use Test::Exception;
+use Test::Deep;
+
+BEGIN {
+ require DBIx::Class;
+ plan skip_all => 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for('admin')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for('admin');
+}
+
+use lib 't/lib';
+use DBICTest;
+
+use ok 'DBIx::Class::Admin';
+
+
+{ # test data manipulation functions
+
+ # create a DBICTest so we can steal its connect info
+ my $schema = DBICTest->init_schema(
+ sqlite_use_file => 1,
+ );
+
+ my $admin = DBIx::Class::Admin->new(
+ schema_class=> "DBICTest::Schema",
+ connect_info => $schema->storage->connect_info(),
+ quiet => 1,
+ _confirm=>1,
+ );
+ isa_ok ($admin, 'DBIx::Class::Admin', 'create the admin object');
+
+ $admin->insert('Employee', { name => 'Matt' });
+ my $employees = $schema->resultset('Employee');
+ is ($employees->count(), 1, "insert okay" );
+
+ my $employee = $employees->find(1);
+ is($employee->name(), 'Matt', "insert valid" );
+
+ $admin->update('Employee', {name => 'Trout'}, {name => 'Matt'});
+
+ $employee = $employees->find(1);
+ is($employee->name(), 'Trout', "update Matt to Trout" );
+
+ $admin->insert('Employee', {name =>'Aran'});
+
+ my $expected_data = [
+ [$employee->result_source->columns() ],
+ [1,1,undef,undef,undef,'Trout'],
+ [2,2,undef,undef,undef,'Aran']
+ ];
+ my $data;
+  lives_ok { $data = $admin->select('Employee')} 'can retrieve data from database';
+  cmp_deeply($data, $expected_data, 'DB matches what is expected');
+
+ $admin->delete('Employee', {name=>'Trout'});
+ my $del_rs = $employees->search({name => 'Trout'});
+ is($del_rs->count(), 0, "delete Trout" );
+ is ($employees->count(), 1, "left Aran" );
+}
+
+done_testing;
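One detail worth calling out from the expected data above: select() appears to return the header row (the result source's column list) first, followed by one arrayref per data row. The four data helpers, as the test drives them (given an $admin constructed as in the test):

  $admin->insert('Employee', { name => 'Matt' });
  $admin->update('Employee', { name => 'Trout' }, { name => 'Matt' });

  my $data = $admin->select('Employee');
  my ($header, @rows) = @$data;    # column names first, then the data rows

  $admin->delete('Employee', { name => 'Trout' });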
Copied: DBIx-Class/0.08/branches/prefetch/t/admin/10script.t (from rev 7451, DBIx-Class/0.08/branches/prefetch/t/89dbicadmin.t)
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/admin/10script.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/admin/10script.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,91 @@
+# vim: filetype=perl
+use strict;
+use warnings;
+
+use Test::More;
+use Config;
+use lib qw(t/lib);
+$ENV{PERL5LIB} = join ($Config{path_sep}, @INC);
+use DBICTest;
+
+
+BEGIN {
+ require DBIx::Class;
+ plan skip_all => 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for('admin_script')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for('admin_script');
+}
+
+my @json_backends = qw/XS JSON DWIW/;
+my $tests_per_run = 5;
+
+plan tests => $tests_per_run * @json_backends;
+
+for my $js (@json_backends) {
+
+ eval {JSON::Any->import ($js) };
+ SKIP: {
+ skip ("Json backend $js is not available, skip testing", $tests_per_run) if $@;
+
+ $ENV{JSON_ANY_ORDER} = $js;
+ eval { test_dbicadmin () };
+ diag $@ if $@;
+ }
+}
+
+sub test_dbicadmin {
+ my $schema = DBICTest->init_schema( sqlite_use_file => 1 ); # reinit a fresh db for every run
+
+ my $employees = $schema->resultset('Employee');
+
+ system( _prepare_system_args( qw|--op=insert --set={"name":"Matt"}| ) );
+ ok( ($employees->count()==1), "$ENV{JSON_ANY_ORDER}: insert count" );
+
+ my $employee = $employees->find(1);
+ ok( ($employee->name() eq 'Matt'), "$ENV{JSON_ANY_ORDER}: insert valid" );
+
+ system( _prepare_system_args( qw|--op=update --set={"name":"Trout"}| ) );
+ $employee = $employees->find(1);
+ ok( ($employee->name() eq 'Trout'), "$ENV{JSON_ANY_ORDER}: update" );
+
+ system( _prepare_system_args( qw|--op=insert --set={"name":"Aran"}| ) );
+
+ SKIP: {
+ skip ("MSWin32 doesn't support -| either", 1) if $^O eq 'MSWin32';
+
+ open(my $fh, "-|", _prepare_system_args( qw|--op=select --attrs={"order_by":"name"}| ) ) or die $!;
+ my $data = do { local $/; <$fh> };
+ close($fh);
+ if (!ok( ($data=~/Aran.*Trout/s), "$ENV{JSON_ANY_ORDER}: select with attrs" )) {
+ diag ("data from select is $data")
+ };
+ }
+
+ system( _prepare_system_args( qw|--op=delete --where={"name":"Trout"}| ) );
+ ok( ($employees->count()==1), "$ENV{JSON_ANY_ORDER}: delete" );
+}
+
+# Why do we need this crap? Apparently MSWin32 cannot pass through quotes properly
+# (sometimes it will and sometimes not, depending on what compiler was used to build
+# perl). So we go the extra mile to escape all the quotes. Nor can we use ' instead
+# of ", because JSON::XS (proudly) does not support "malformed JSON" as the author
+# calls it. Bleh.
+#
+sub _prepare_system_args {
+ my $perl = $^X;
+
+ my @args = (
+ qw|script/dbicadmin --quiet --schema=DBICTest::Schema --class=Employee|,
+ q|--connect=["dbi:SQLite:dbname=t/var/DBIxClass.db","","",{"AutoCommit":1}]|,
+ qw|--force|,
+ @_,
+ );
+
+ if ( $^O eq 'MSWin32' ) {
+ $perl = qq|"$perl"|; # execution will fail if $^X contains paths
+ for (@args) {
+ $_ =~ s/"/\\"/g;
+ }
+ }
+
+ return ($perl, @args);
+}
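To make the escaping above concrete, here is what happens to a hypothetical JSON-bearing argument on MSWin32 (single quotes are not an alternative, since the payload must stay strictly valid JSON):

  my $arg = '--set={"name":"Matt"}';
  (my $escaped = $arg) =~ s/"/\\"/g;
  # $escaped now contains:  --set={\"name\":\"Matt\"}
  # which survives the Win32 shell and reaches dbicadmin intact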
Modified: DBIx-Class/0.08/branches/prefetch/t/inflate/datetime_sybase_asa.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/inflate/datetime_sybase_asa.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/inflate/datetime_sybase_asa.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -44,7 +44,7 @@
# coltype, col, date
my @dt_types = (
- ['TIMESTAMP', 'last_updated_at', '2004-08-21 14:36:48.080444'],
+ ['TIMESTAMP', 'last_updated_at', '2004-08-21 14:36:48.080445'],
# date only (but minute precision according to ASA docs)
['DATE', 'small_dt', '2004-08-21 00:00:00.000000'],
);
@@ -73,6 +73,9 @@
->first
);
is( $row->$col, $dt, 'DateTime roundtrip' );
+
+ is $row->$col->nanosecond, $dt->nanosecond,
+ 'nanoseconds survived' if 0+$dt->nanosecond;
}
}
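The new assertion guards sub-second precision across the inflate/deflate round trip (note the fixture timestamp also changed in its last digit). A small illustration of the DateTime side, with the same fractional value:

  use DateTime;

  my $dt = DateTime->new(
      year => 2004, month  => 8,  day    => 21,
      hour => 14,   minute => 36, second => 48,
      nanosecond => 80_445_000,    # the .080445 fractional part
  );
  printf "%09d\n", $dt->nanosecond;    # prints 080445000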
Modified: DBIx-Class/0.08/branches/prefetch/t/lib/DBICTest/Schema/ForceForeign.pm
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/lib/DBICTest/Schema/ForceForeign.pm 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/lib/DBICTest/Schema/ForceForeign.pm 2010-02-14 10:46:07 UTC (rev 8704)
@@ -12,30 +12,21 @@
# Normally this would not appear as a FK constraint
# since it uses the PK
-__PACKAGE__->might_have(
- 'artist_1', 'DBICTest::Schema::Artist', {
- 'foreign.artistid' => 'self.artist',
- }, {
- is_foreign_key_constraint => 1,
- },
+__PACKAGE__->might_have('artist_1', 'DBICTest::Schema::Artist',
+ { 'foreign.artistid' => 'self.artist' },
+ { is_foreign_key_constraint => 1 },
);
# Normally this would appear as a FK constraint
-__PACKAGE__->might_have(
- 'cd_1', 'DBICTest::Schema::CD', {
- 'foreign.cdid' => 'self.cd',
- }, {
- is_foreign_key_constraint => 0,
- },
+__PACKAGE__->might_have('cd_1', 'DBICTest::Schema::CD',
+ { 'foreign.cdid' => 'self.cd' },
+ { is_foreign_key_constraint => 0 },
);
# Normally this would appear as a FK constraint
-__PACKAGE__->belongs_to(
- 'cd_3', 'DBICTest::Schema::CD', {
- 'foreign.cdid' => 'self.cd',
- }, {
- is_foreign_key_constraint => 0,
- },
+__PACKAGE__->belongs_to('cd_3', 'DBICTest::Schema::CD',
+ { 'foreign.cdid' => 'self.cd' },
+ { is_foreign_key_constraint => 0 },
);
1;
Added: DBIx-Class/0.08/branches/prefetch/t/resultset/is_ordered.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/resultset/is_ordered.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/resultset/is_ordered.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,90 @@
+use strict;
+use warnings;
+
+use lib qw(t/lib);
+use Test::More;
+use Test::Exception;
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+my $rs = $schema->resultset('Artist');
+
+ok !$rs->is_ordered, 'vanilla resultset is not ordered';
+
+# Simple ordering with a single column
+{
+ my $ordered = $rs->search(undef, { order_by => 'artistid' });
+ ok $ordered->is_ordered, 'Simple column ordering detected by is_ordered';
+}
+
+# Hashref order direction
+{
+ my $ordered = $rs->search(undef, { order_by => { -desc => 'artistid' } });
+ ok $ordered->is_ordered, 'resultset with order direction is_ordered';
+}
+
+# Column ordering with literal SQL
+{
+ my $ordered = $rs->search(undef, { order_by => \'artistid DESC' });
+ ok $ordered->is_ordered, 'resultset with literal SQL is_ordered';
+}
+
+# Multiple column ordering
+{
+ my $ordered = $rs->search(undef, { order_by => ['artistid', 'name'] });
+ ok $ordered->is_ordered, 'ordering with multiple columns as arrayref is ordered';
+}
+
+# More complicated ordering
+{
+ my $ordered = $rs->search(undef, {
+ order_by => [
+ { -asc => 'artistid' },
+ { -desc => 'name' },
+ ]
+ });
+ ok $ordered->is_ordered, 'more complicated resultset ordering is_ordered';
+}
+
+# Empty multi-column ordering arrayref
+{
+ my $ordered = $rs->search(undef, { order_by => [] });
+ ok !$ordered->is_ordered, 'ordering with empty arrayref is not ordered';
+}
+
+# Multi-column ordering syntax with empty hashref
+{
+ my $ordered = $rs->search(undef, { order_by => [{}] });
+ ok !$ordered->is_ordered, 'ordering with [{}] is not ordered';
+}
+
+# Remove ordering after being set
+{
+ my $ordered = $rs->search(undef, { order_by => 'artistid' });
+ ok $ordered->is_ordered, 'resultset with ordering applied works..';
+ my $unordered = $ordered->search(undef, { order_by => undef });
+ ok !$unordered->is_ordered, '..and is not ordered with ordering removed';
+}
+
+# Search without ordering
+{
+ my $ordered = $rs->search({ name => 'We Are Goth' }, { join => 'cds' });
+ ok !$ordered->is_ordered, 'WHERE clause but no order_by is not ordered';
+}
+
+# Other functions without ordering
+{
+ # Join
+ my $joined = $rs->search(undef, { join => 'cds' });
+ ok !$joined->is_ordered, 'join but no order_by is not ordered';
+
+ # Group By
+ my $grouped = $rs->search(undef, { group_by => 'rank' });
+ ok !$grouped->is_ordered, 'group_by but no order_by is not ordered';
+
+ # Paging
+ my $paged = $rs->search(undef, { page=> 5 });
+ ok !$paged->is_ordered, 'paging but no order_by is not ordered';
+}
+
+done_testing;
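The method itself just reports whether an order_by is in effect on the resultset. A hypothetical use for the kind of guard this enables (application code, not part of this commit):

  my $rs = $schema->resultset('Artist')->search(\%cond, \%attrs);

  # apply a default order only when the caller did not supply one
  $rs = $rs->search(undef, { order_by => 'me.artistid' })
      unless $rs->is_ordered;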
Added: DBIx-Class/0.08/branches/prefetch/t/search/select_chains.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/search/select_chains.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/search/select_chains.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,61 @@
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+
+use lib qw(t/lib);
+use DBIC::SqlMakerTest;
+use DBICTest;
+
+
+my $schema = DBICTest->init_schema();
+
+my @chain = (
+ {
+ columns => [ 'cdid' ],
+ '+columns' => [ { title_lc => { lower => 'title' } } ],
+ '+select' => [ 'genreid' ],
+ '+as' => [ 'genreid' ],
+ } => 'SELECT me.cdid, LOWER( title ), me.genreid FROM cd me',
+
+ {
+ '+columns' => [ { max_year => { max => 'me.year' }}, ],
+ '+select' => [ { count => 'me.cdid' }, ],
+ '+as' => [ 'cnt' ],
+ } => 'SELECT me.cdid, LOWER( title ), MAX( me.year ), me.genreid, COUNT( me.cdid ) FROM cd me',
+
+ {
+ select => [ { min => 'me.cdid' }, ],
+ as => [ 'min_id' ],
+ } => 'SELECT MIN( me.cdid ) FROM cd me',
+
+ {
+ '+columns' => [ { cnt => { count => 'cdid' } } ],
+ } => 'SELECT MIN( me.cdid ), COUNT ( cdid ) FROM cd me',
+
+ {
+ columns => [ 'year' ],
+ } => 'SELECT me.year FROM cd me',
+);
+
+my $rs = $schema->resultset('CD');
+
+my $testno = 1;
+while (@chain) {
+ my $attrs = shift @chain;
+ my $sql = shift @chain;
+
+ $rs = $rs->search ({}, $attrs);
+
+ is_same_sql_bind (
+ $rs->as_query,
+ "($sql)",
+ [],
+ "Test $testno of SELECT assembly ok",
+ );
+
+ $testno++;
+}
+
+done_testing;
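The pattern the chain above exercises: plain columns/select/as replace the accumulated selection, while the '+'-prefixed variants append to it. Per the expectations above, a two-step chain like this should render both the named column and the appended aggregate:

  my $rs = $schema->resultset('CD')
      ->search({}, { columns    => ['cdid'] })
      ->search({}, { '+columns' => [ { cnt => { count => 'cdid' } } ] });

  # expected SQL: SELECT me.cdid, COUNT( cdid ) FROM cd me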
Copied: DBIx-Class/0.08/branches/prefetch/t/storage/replicated.t (from rev 7983, DBIx-Class/0.08/branches/prefetch/t/storage/replication.t)
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/storage/replicated.t (rev 0)
+++ DBIx-Class/0.08/branches/prefetch/t/storage/replicated.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -0,0 +1,907 @@
+use strict;
+use warnings;
+use lib qw(t/lib);
+use Test::More;
+use Test::Exception;
+use DBICTest;
+use List::Util 'first';
+use Scalar::Util 'reftype';
+use File::Spec;
+use IO::Handle;
+
+BEGIN {
+ eval { require Test::Moose; Test::Moose->import() };
+ plan skip_all => "Need Test::Moose to run this test" if $@;
+ require DBIx::Class;
+
+ plan skip_all => 'Test needs ' . DBIx::Class::Optional::Dependencies->req_missing_for ('replicated')
+ unless DBIx::Class::Optional::Dependencies->req_ok_for ('replicated');
+}
+
+use_ok 'DBIx::Class::Storage::DBI::Replicated::Pool';
+use_ok 'DBIx::Class::Storage::DBI::Replicated::Balancer';
+use_ok 'DBIx::Class::Storage::DBI::Replicated::Replicant';
+use_ok 'DBIx::Class::Storage::DBI::Replicated';
+
+use Moose();
+use MooseX::Types();
+diag "Using Moose version $Moose::VERSION and MooseX::Types version $MooseX::Types::VERSION";
+
+=head1 HOW TO USE
+
+  This is a test of the replicated storage system. This will work in one of
+  two ways: either it will try to fake replication with a couple of SQLite DBs
+  and creative use of copy, or, if you define a couple of %ENV vars correctly,
+  it will try to test against those. If you do that, it will assume the setup
+  is properly replicating. Your results may vary, but I have demonstrated this
+  to work with mysql native replication.
+
+=cut
+
+
+## ----------------------------------------------------------------------------
+## Build a class to hold all our required testing data and methods.
+## ----------------------------------------------------------------------------
+
+TESTSCHEMACLASSES: {
+
+ ## --------------------------------------------------------------------- ##
+ ## Create an object to contain your replicated stuff.
+ ## --------------------------------------------------------------------- ##
+
+ package DBIx::Class::DBI::Replicated::TestReplication;
+
+ use DBICTest;
+ use base qw/Class::Accessor::Fast/;
+
+ __PACKAGE__->mk_accessors( qw/schema/ );
+
+ ## Initialize the object
+
+ sub new {
+ my ($class, $schema_method) = (shift, shift);
+ my $self = $class->SUPER::new(@_);
+
+ $self->schema( $self->init_schema($schema_method) );
+ return $self;
+ }
+
+ ## Get the Schema and set the replication storage type
+
+ sub init_schema {
+ # current SQLT SQLite producer does not handle DROP TABLE IF EXISTS, trap warnings here
+ local $SIG{__WARN__} = sub { warn @_ unless $_[0] =~ /no such table.+DROP TABLE/s };
+
+ my ($class, $schema_method) = @_;
+
+ my $method = "get_schema_$schema_method";
+ my $schema = $class->$method;
+
+ return $schema;
+ }
+
+ sub get_schema_by_storage_type {
+ DBICTest->init_schema(
+ sqlite_use_file => 1,
+ storage_type=>{
+ '::DBI::Replicated' => {
+ balancer_type=>'::Random',
+ balancer_args=>{
+ auto_validate_every=>100,
+ master_read_weight => 1
+ },
+ }
+ },
+ deploy_args=>{
+ add_drop_table => 1,
+ },
+ );
+ }
+
+ sub get_schema_by_connect_info {
+ DBICTest->init_schema(
+ sqlite_use_file => 1,
+ storage_type=> '::DBI::Replicated',
+ balancer_type=>'::Random',
+ balancer_args=> {
+ auto_validate_every=>100,
+ master_read_weight => 1
+ },
+ deploy_args=>{
+ add_drop_table => 1,
+ },
+ );
+ }
+
+ sub generate_replicant_connect_info {}
+ sub replicate {}
+ sub cleanup {}
+
+ ## --------------------------------------------------------------------- ##
+ ## Add a connect_info option to test option merging.
+ ## --------------------------------------------------------------------- ##
+ {
+ package DBIx::Class::Storage::DBI::Replicated;
+
+ use Moose;
+
+ __PACKAGE__->meta->make_mutable;
+
+ around connect_info => sub {
+ my ($next, $self, $info) = @_;
+ $info->[3]{master_option} = 1;
+ $self->$next($info);
+ };
+
+ __PACKAGE__->meta->make_immutable;
+
+ no Moose;
+ }
+
+ ## --------------------------------------------------------------------- ##
+  ## Subclass for when you are using SQLite for testing; this provides fake
+  ## replication support.
+ ## --------------------------------------------------------------------- ##
+
+ package DBIx::Class::DBI::Replicated::TestReplication::SQLite;
+
+ use DBICTest;
+ use File::Copy;
+ use base 'DBIx::Class::DBI::Replicated::TestReplication';
+
+ __PACKAGE__->mk_accessors(qw/master_path slave_paths/);
+
+ ## Set the master path from DBICTest
+
+ sub new {
+ my $class = shift @_;
+ my $self = $class->SUPER::new(@_);
+
+ $self->master_path( DBICTest->_sqlite_dbfilename );
+ $self->slave_paths([
+ File::Spec->catfile(qw/t var DBIxClass_slave1.db/),
+ File::Spec->catfile(qw/t var DBIxClass_slave2.db/),
+ ]);
+
+ return $self;
+ }
+
+ ## Return an Array of ArrayRefs where each ArrayRef is suitable to use for
+ ## $storage->connect_info to be used for connecting replicants.
+
+ sub generate_replicant_connect_info {
+ my $self = shift @_;
+ my @dsn = map {
+ "dbi:SQLite:${_}";
+ } @{$self->slave_paths};
+
+ my @connect_infos = map { [$_,'','',{AutoCommit=>1}] } @dsn;
+
+ ## Make sure nothing is left over from a failed test
+ $self->cleanup;
+
+ ## try a hashref too
+ my $c = $connect_infos[0];
+ $connect_infos[0] = {
+ dsn => $c->[0],
+ user => $c->[1],
+ password => $c->[2],
+ %{ $c->[3] }
+ };
+
+ @connect_infos
+ }
+
+ ## Do a 'good enough' replication by copying the master dbfile over each of
+ ## the slave dbfiles. If the master is SQLite we do this, otherwise we
+ ## just do a one second pause to let the slaves catch up.
+
+ sub replicate {
+ my $self = shift @_;
+ foreach my $slave (@{$self->slave_paths}) {
+ copy($self->master_path, $slave);
+ }
+ }
+
+  ## Cleanup after ourselves. Unlink all the slave paths.
+
+ sub cleanup {
+ my $self = shift @_;
+ foreach my $slave (@{$self->slave_paths}) {
+ if(-e $slave) {
+ unlink $slave;
+ }
+ }
+ }
+
+ ## --------------------------------------------------------------------- ##
+  ## Subclass for when you are setting the databases via custom export vars.
+ ## This is for when you have a replicating database setup that you are
+ ## going to test against. You'll need to define the correct $ENV and have
+ ## two slave databases to test against, as well as a replication system
+ ## that will replicate in less than 1 second.
+ ## --------------------------------------------------------------------- ##
+
+ package DBIx::Class::DBI::Replicated::TestReplication::Custom;
+ use base 'DBIx::Class::DBI::Replicated::TestReplication';
+
+ ## Return an Array of ArrayRefs where each ArrayRef is suitable to use for
+ ## $storage->connect_info to be used for connecting replicants.
+
+ sub generate_replicant_connect_info {
+ return (
+ [$ENV{"DBICTEST_SLAVE0_DSN"}, $ENV{"DBICTEST_SLAVE0_DBUSER"}, $ENV{"DBICTEST_SLAVE0_DBPASS"}, {AutoCommit => 1}],
+ [$ENV{"DBICTEST_SLAVE1_DSN"}, $ENV{"DBICTEST_SLAVE1_DBUSER"}, $ENV{"DBICTEST_SLAVE1_DBPASS"}, {AutoCommit => 1}],
+ );
+ }
+
+ ## pause a bit to let the replication catch up
+
+ sub replicate {
+ sleep 1;
+ }
+}
+
+## ----------------------------------------------------------------------------
+## Create an object and run some tests
+## ----------------------------------------------------------------------------
+
+## This first bunch of tests is basic, just making sure all the bits are behaving
+
+my $replicated_class = DBICTest->has_custom_dsn ?
+ 'DBIx::Class::DBI::Replicated::TestReplication::Custom' :
+ 'DBIx::Class::DBI::Replicated::TestReplication::SQLite';
+
+my $replicated;
+
+for my $method (qw/by_connect_info by_storage_type/) {
+ undef $replicated;
+ ok $replicated = $replicated_class->new($method)
+ => "Created a replication object $method";
+
+ isa_ok $replicated->schema
+ => 'DBIx::Class::Schema';
+
+ isa_ok $replicated->schema->storage
+ => 'DBIx::Class::Storage::DBI::Replicated';
+
+ isa_ok $replicated->schema->storage->balancer
+ => 'DBIx::Class::Storage::DBI::Replicated::Balancer::Random'
+ => 'configured balancer_type';
+}
+
+### check that all Storage::DBI methods are handled by ::Replicated
+{
+ my @storage_dbi_methods = Class::MOP::Class
+ ->initialize('DBIx::Class::Storage::DBI')->get_all_method_names;
+
+ my @replicated_methods = DBIx::Class::Storage::DBI::Replicated->meta
+ ->get_all_method_names;
+
+# remove constants and OTHER_CRAP
+ @storage_dbi_methods = grep !/^[A-Z_]+\z/, @storage_dbi_methods;
+
+# remove CAG accessors
+ @storage_dbi_methods = grep !/_accessor\z/, @storage_dbi_methods;
+
+# remove DBIx::Class (the root parent, with CAG and stuff) methods
+ my @root_methods = Class::MOP::Class->initialize('DBIx::Class')
+ ->get_all_method_names;
+ my %count;
+ $count{$_}++ for (@storage_dbi_methods, @root_methods);
+
+ @storage_dbi_methods = grep $count{$_} != 2, @storage_dbi_methods;
+
+# make hashes
+ my %storage_dbi_methods;
+ @storage_dbi_methods{@storage_dbi_methods} = ();
+ my %replicated_methods;
+ @replicated_methods{@replicated_methods} = ();
+
+# remove ::Replicated-specific methods
+ for my $method (@replicated_methods) {
+ delete $replicated_methods{$method}
+ unless exists $storage_dbi_methods{$method};
+ }
+ @replicated_methods = keys %replicated_methods;
+
+# check that what's left is implemented
+ %count = ();
+ $count{$_}++ for (@storage_dbi_methods, @replicated_methods);
+
+ if ((grep $count{$_} == 2, @storage_dbi_methods) == @storage_dbi_methods) {
+ pass 'all DBIx::Class::Storage::DBI methods implemented';
+ }
+ else {
+ my @unimplemented = grep $count{$_} == 1, @storage_dbi_methods;
+
+ fail 'the following DBIx::Class::Storage::DBI methods are unimplemented: '
+ . "@unimplemented";
+ }
+}
+
+ok $replicated->schema->storage->meta
+ => 'has a meta object';
+
+isa_ok $replicated->schema->storage->master
+ => 'DBIx::Class::Storage::DBI';
+
+isa_ok $replicated->schema->storage->pool
+ => 'DBIx::Class::Storage::DBI::Replicated::Pool';
+
+does_ok $replicated->schema->storage->balancer
+ => 'DBIx::Class::Storage::DBI::Replicated::Balancer';
+
+ok my @replicant_connects = $replicated->generate_replicant_connect_info
+ => 'got replication connect information';
+
+ok my @replicated_storages = $replicated->schema->storage->connect_replicants(@replicant_connects)
+ => 'Created some storages suitable for replicants';
+
+our %debug;
+$replicated->schema->storage->debug(1);
+$replicated->schema->storage->debugcb(sub {
+ my ($op, $info) = @_;
+ ##warn "\n$op, $info\n";
+ %debug = (
+ op => $op,
+ info => $info,
+ dsn => ($info=~m/\[(.+)\]/)[0],
+ storage_type => $info=~m/REPLICANT/ ? 'REPLICANT' : 'MASTER',
+ );
+});
+
+ok my @all_storages = $replicated->schema->storage->all_storages
+ => '->all_storages';
+
+is scalar @all_storages,
+ 3
+ => 'correct number of ->all_storages';
+
+is ((grep $_->isa('DBIx::Class::Storage::DBI'), @all_storages),
+ 3
+ => '->all_storages are correct type');
+
+my @all_storage_opts =
+ grep { (reftype($_)||'') eq 'HASH' }
+ map @{ $_->_connect_info }, @all_storages;
+
+is ((grep $_->{master_option}, @all_storage_opts),
+ 3
+ => 'connect_info was merged from master to replicants');
+
+my @replicant_names = keys %{ $replicated->schema->storage->replicants };
+
+ok @replicant_names, "found replicant names @replicant_names";
+
+## Silence warning about not supporting the is_replicating method if using the
+## sqlite dbs.
+$replicated->schema->storage->debugobj->silence(1)
+ if first { m{^t/} } @replicant_names;
+
+isa_ok $replicated->schema->storage->balancer->current_replicant
+ => 'DBIx::Class::Storage::DBI';
+
+$replicated->schema->storage->debugobj->silence(0);
+
+ok $replicated->schema->storage->pool->has_replicants
+ => 'does have replicants';
+
+is $replicated->schema->storage->pool->num_replicants => 2
+ => 'has two replicants';
+
+does_ok $replicated_storages[0]
+ => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
+
+does_ok $replicated_storages[1]
+ => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
+
+does_ok $replicated->schema->storage->replicants->{$replicant_names[0]}
+ => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
+
+does_ok $replicated->schema->storage->replicants->{$replicant_names[1]}
+ => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
+
+## Add some info to the database
+
+$replicated
+ ->schema
+ ->populate('Artist', [
+ [ qw/artistid name/ ],
+ [ 4, "Ozric Tentacles"],
+ ]);
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ like $debug{info}, qr/INSERT/, 'Last was an insert';
+
+## Make sure all the slaves have the table definitions
+
+$replicated->replicate;
+$replicated->schema->storage->replicants->{$replicant_names[0]}->active(1);
+$replicated->schema->storage->replicants->{$replicant_names[1]}->active(1);
+
+## Silence warning about not supporting the is_replicating method if using the
+## sqlite dbs.
+$replicated->schema->storage->debugobj->silence(1)
+ if first { m{^t/} } @replicant_names;
+
+$replicated->schema->storage->pool->validate_replicants;
+
+$replicated->schema->storage->debugobj->silence(0);
+
+## Make sure we can read the data.
+
+ok my $artist1 = $replicated->schema->resultset('Artist')->find(4)
+ => 'Created Result';
+
+## We removed testing here since master read weight is on, so we can't tell in
+## advance what storage to expect. We turn master read weight off a bit lower
+## is $debug{storage_type}, 'REPLICANT'
+## => "got last query from a replicant: $debug{dsn}, $debug{info}";
+
+isa_ok $artist1
+ => 'DBICTest::Artist';
+
+is $artist1->name, 'Ozric Tentacles'
+ => 'Found expected name for first result';
+
+## Check that master_read_weight is honored
+{
+ no warnings qw/once redefine/;
+
+ local
+ *DBIx::Class::Storage::DBI::Replicated::Balancer::Random::_random_number =
+ sub { 999 };
+
+ $replicated->schema->storage->balancer->increment_storage;
+
+ is $replicated->schema->storage->balancer->current_replicant,
+ $replicated->schema->storage->master
+ => 'master_read_weight is honored';
+
+ ## turn it off for the duration of the test
+ $replicated->schema->storage->balancer->master_read_weight(0);
+ $replicated->schema->storage->balancer->increment_storage;
+}
+
+## Add some new rows that only the master will have. This is because
+## we overload any type of write operation so that it must hit the master
+## database.
+
+$replicated
+ ->schema
+ ->populate('Artist', [
+ [ qw/artistid name/ ],
+ [ 5, "Doom's Children"],
+ [ 6, "Dead On Arrival"],
+ [ 7, "Watergate"],
+ ]);
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ like $debug{info}, qr/INSERT/, 'Last was an insert';
+
+## Make sure all the slaves have the table definitions
+$replicated->replicate;
+
+## Should find some data now
+
+ok my $artist2 = $replicated->schema->resultset('Artist')->find(5)
+    => 'Sync succeeded';
+
+is $debug{storage_type}, 'REPLICANT'
+ => "got last query from a replicant: $debug{dsn}";
+
+isa_ok $artist2
+ => 'DBICTest::Artist';
+
+is $artist2->name, "Doom's Children"
+ => 'Found expected name for first result';
+
+## What happens when we disconnect all the replicants?
+
+is $replicated->schema->storage->pool->connected_replicants => 2
+ => "both replicants are connected";
+
+$replicated->schema->storage->replicants->{$replicant_names[0]}->disconnect;
+$replicated->schema->storage->replicants->{$replicant_names[1]}->disconnect;
+
+is $replicated->schema->storage->pool->connected_replicants => 0
+ => "both replicants are now disconnected";
+
+## All these should pass, since the database should automatically reconnect
+
+ok my $artist3 = $replicated->schema->resultset('Artist')->find(6)
+ => 'Still finding stuff.';
+
+is $debug{storage_type}, 'REPLICANT'
+ => "got last query from a replicant: $debug{dsn}";
+
+isa_ok $artist3
+ => 'DBICTest::Artist';
+
+is $artist3->name, "Dead On Arrival"
+ => 'Found expected name for first result';
+
+is $replicated->schema->storage->pool->connected_replicants => 1
+ => "At Least One replicant reconnected to handle the job";
+
+## What happens when we try to select something that doesn't exist?
+
+ok ! $replicated->schema->resultset('Artist')->find(666)
+ => 'Correctly failed to find something.';
+
+is $debug{storage_type}, 'REPLICANT'
+ => "got last query from a replicant: $debug{dsn}";
+
+## test the reliable option
+
+TESTRELIABLE: {
+
+ $replicated->schema->storage->set_reliable_storage;
+
+ ok $replicated->schema->resultset('Artist')->find(2)
+ => 'Read from master 1';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ ok $replicated->schema->resultset('Artist')->find(5)
+ => 'Read from master 2';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ $replicated->schema->storage->set_balanced_storage;
+
+ ok $replicated->schema->resultset('Artist')->find(3)
+ => 'Read from replicant';
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+}
+
+## Make sure when reliable goes out of scope, we are using replicants again
+
+ok $replicated->schema->resultset('Artist')->find(1)
+ => 'back to replicant 1.';
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
+ok $replicated->schema->resultset('Artist')->find(2)
+ => 'back to replicant 2.';
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
+## set all the replicants to inactive, and make sure the balancer falls back to
+## the master.
+
+$replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
+$replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
+
+{
+ ## catch the fallback to master warning
+ open my $debugfh, '>', \my $fallback_warning;
+ my $oldfh = $replicated->schema->storage->debugfh;
+ $replicated->schema->storage->debugfh($debugfh);
+
+ ok $replicated->schema->resultset('Artist')->find(2)
+ => 'Fallback to master';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ like $fallback_warning, qr/falling back to master/
+ => 'emits falling back to master warning';
+
+ $replicated->schema->storage->debugfh($oldfh);
+}
+
+$replicated->schema->storage->replicants->{$replicant_names[0]}->active(1);
+$replicated->schema->storage->replicants->{$replicant_names[1]}->active(1);
+
+## Silence warning about not supporting the is_replicating method if using the
+## sqlite dbs.
+$replicated->schema->storage->debugobj->silence(1)
+ if first { m{^t/} } @replicant_names;
+
+$replicated->schema->storage->pool->validate_replicants;
+
+$replicated->schema->storage->debugobj->silence(0);
+
+ok $replicated->schema->resultset('Artist')->find(2)
+    => 'Returned to replicants';
+
+is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
+## Getting slave status tests
+
+SKIP: {
+  ## We skip these tests unless you have custom replicants, since the default
+ ## sqlite based replication tests don't support these functions.
+
+ skip 'Cannot Test Replicant Status on Non Replicating Database', 10
+ unless DBICTest->has_custom_dsn && $ENV{"DBICTEST_SLAVE0_DSN"};
+
+  $replicated->replicate; ## Give the slaves a chance to catch up.
+
+ ok $replicated->schema->storage->replicants->{$replicant_names[0]}->is_replicating
+ => 'Replicants are replicating';
+
+ is $replicated->schema->storage->replicants->{$replicant_names[0]}->lag_behind_master, 0
+ => 'Replicant is zero seconds behind master';
+
+ ## Test the validate replicants
+
+ $replicated->schema->storage->pool->validate_replicants;
+
+ is $replicated->schema->storage->pool->active_replicants, 2
+ => 'Still have 2 replicants after validation';
+
+  ## Force the replicants to fail the validate test by requiring their lag to
+  ## be negative (i.e. ahead of the master!)
+
+ $replicated->schema->storage->pool->maximum_lag(-10);
+ $replicated->schema->storage->pool->validate_replicants;
+
+ is $replicated->schema->storage->pool->active_replicants, 0
+    => 'No way a replicant can be ahead of the master';
+
+ ## Let's be fair to the replicants again. Let them lag up to 5
+
+ $replicated->schema->storage->pool->maximum_lag(5);
+ $replicated->schema->storage->pool->validate_replicants;
+
+ is $replicated->schema->storage->pool->active_replicants, 2
+ => 'Both replicants in good standing again';
+
+ ## Check auto validate
+
+ is $replicated->schema->storage->balancer->auto_validate_every, 100
+ => "Got the expected value for auto validate";
+
+  ## This will make sure we auto validate every time
+ $replicated->schema->storage->balancer->auto_validate_every(0);
+
+ ## set all the replicants to inactive, and make sure the balancer falls back to
+ ## the master.
+
+ $replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
+ $replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
+
+ ## Ok, now when we go to run a query, autovalidate SHOULD reconnect
+
+ is $replicated->schema->storage->pool->active_replicants => 0
+ => "both replicants turned off";
+
+ ok $replicated->schema->resultset('Artist')->find(5)
+ => 'replicant reactivated';
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
+ is $replicated->schema->storage->pool->active_replicants => 2
+ => "both replicants reactivated";
+}
+
+## Test the reliably callback
+
+ok my $reliably = sub {
+
+ ok $replicated->schema->resultset('Artist')->find(5)
+ => 'replicant reactivated';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+} => 'created coderef properly';
+
+$replicated->schema->storage->execute_reliably($reliably);
+
+## Try something with an error
+
+ok my $unreliably = sub {
+
+ ok $replicated->schema->resultset('ArtistXX')->find(5)
+ => 'replicant reactivated';
+
+} => 'created coderef properly';
+
+throws_ok {$replicated->schema->storage->execute_reliably($unreliably)}
+ qr/Can't find source for ArtistXX/
+ => 'Bad coderef throws proper error';
+
+## Make sure replication came back
+
+ok $replicated->schema->resultset('Artist')->find(3)
+ => 'replicant reactivated';
+
+is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+
+## make sure transactions are set to execute_reliably
+
+ok my $transaction = sub {
+
+ my $id = shift @_;
+
+ $replicated
+ ->schema
+ ->populate('Artist', [
+ [ qw/artistid name/ ],
+ [ $id, "Children of the Grave"],
+ ]);
+
+ ok my $result = $replicated->schema->resultset('Artist')->find($id)
+ => "Found expected artist for $id";
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'Found expected artist again for 1';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ return ($result, $more);
+
+} => 'Created a coderef properly';
+
+## Test the transaction with multi return
+{
+ ok my @return = $replicated->schema->txn_do($transaction, 666)
+ => 'did transaction';
+
+ is $return[0]->id, 666
+ => 'first returned value is correct';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ is $return[1]->id, 1
+ => 'second returned value is correct';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+}
+
+## Test that asking for single return works
+{
+ ok my @return = $replicated->schema->txn_do($transaction, 777)
+ => 'did transaction';
+
+ is $return[0]->id, 777
+ => 'first returned value is correct';
+
+ is $return[1]->id, 1
+ => 'second returned value is correct';
+}
+
+## Test transaction returning a single value
+
+{
+ ok my $result = $replicated->schema->txn_do(sub {
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'found inside a transaction';
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+ return $more;
+ }) => 'successfully processed transaction';
+
+ is $result->id, 1
+ => 'Got expected single result from transaction';
+}
+
+## Make sure replication came back
+
+ok $replicated->schema->resultset('Artist')->find(1)
+ => 'replicant reactivated';
+
+is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+
+## Test Discard changes
+
+{
+ ok my $artist = $replicated->schema->resultset('Artist')->find(2)
+ => 'got an artist to test discard changes';
+
+ is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+
+ ok $artist->get_from_storage({force_pool=>'master'})
+ => 'properly discard changes';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+
+ ok $artist->discard_changes({force_pool=>'master'})
+ => 'properly called discard_changes against master (manual attrs)';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+
+ ok $artist->discard_changes()
+ => 'properly called discard_changes against master (default attrs)';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+
+ ok $artist->discard_changes({force_pool=>$replicant_names[0]})
+ => 'properly able to override the default attributes';
+
+ is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}"
+}
+
+## Test some edge cases, like trying to do a transaction inside a transaction, etc
+
+{
+ ok my $result = $replicated->schema->txn_do(sub {
+ return $replicated->schema->txn_do(sub {
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'found inside a transaction inside a transaction';
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+ return $more;
+ });
+ }) => 'successfully processed transaction';
+
+ is $result->id, 1
+ => 'Got expected single result from transaction';
+}
+
+{
+ ok my $result = $replicated->schema->txn_do(sub {
+ return $replicated->schema->storage->execute_reliably(sub {
+ return $replicated->schema->txn_do(sub {
+ return $replicated->schema->storage->execute_reliably(sub {
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'found inside crazy deep transactions and execute_reliably';
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+ return $more;
+ });
+ });
+ });
+ }) => 'successfully processed transaction';
+
+ is $result->id, 1
+ => 'Got expected single result from transaction';
+}
+
+## Test the force_pool resultset attribute.
+
+{
+ ok my $artist_rs = $replicated->schema->resultset('Artist')
+ => 'got artist resultset';
+
+ ## Turn on Forced Pool Storage
+ ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>'master'})
+ => 'Created a resultset using force_pool storage';
+
+ ok my $artist = $reliable_artist_rs->find(2)
+ => 'got an artist result via force_pool storage';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+}
+
+## Test the force_pool resultset attribute part two.
+
+{
+ ok my $artist_rs = $replicated->schema->resultset('Artist')
+ => 'got artist resultset';
+
+ ## Turn on Forced Pool Storage
+ ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>$replicant_names[0]})
+ => 'Created a resultset using force_pool storage';
+
+ ok my $artist = $reliable_artist_rs->find(2)
+ => 'got an artist result via force_pool storage';
+
+ is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+}
+## Delete the old database files
+$replicated->cleanup;
+
+done_testing;
+
+# vim: sw=4 sts=4 :
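As the HOW TO USE section says, the test either fakes replication over SQLite files or, given the right %ENV, runs against a real replicating setup. A sketch of the variables the Custom subclass reads, with illustrative DSNs; it is also assumed that a custom master DSN (the usual DBICTEST_DSN family, so that DBICTest->has_custom_dsn is true) is set:

  $ENV{DBICTEST_SLAVE0_DSN}    = 'dbi:mysql:database=dbic_slave0;host=slave0';
  $ENV{DBICTEST_SLAVE0_DBUSER} = 'dbic_test';
  $ENV{DBICTEST_SLAVE0_DBPASS} = 'secret';
  $ENV{DBICTEST_SLAVE1_DSN}    = 'dbi:mysql:database=dbic_slave1;host=slave1';
  $ENV{DBICTEST_SLAVE1_DBUSER} = 'dbic_test';
  $ENV{DBICTEST_SLAVE1_DBPASS} = 'secret';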
Deleted: DBIx-Class/0.08/branches/prefetch/t/storage/replication.t
===================================================================
--- DBIx-Class/0.08/branches/prefetch/t/storage/replication.t 2010-02-14 10:41:14 UTC (rev 8703)
+++ DBIx-Class/0.08/branches/prefetch/t/storage/replication.t 2010-02-14 10:46:07 UTC (rev 8704)
@@ -1,853 +0,0 @@
-use strict;
-use warnings;
-use lib qw(t/lib);
-use Test::More;
-use Test::Exception;
-use DBICTest;
-use List::Util 'first';
-use Scalar::Util 'reftype';
-use File::Spec;
-use IO::Handle;
-
-BEGIN {
- eval "use DBIx::Class::Storage::DBI::Replicated; use Test::Moose";
- plan skip_all => "Deps not installed: $@" if $@;
-}
-
-use_ok 'DBIx::Class::Storage::DBI::Replicated::Pool';
-use_ok 'DBIx::Class::Storage::DBI::Replicated::Balancer';
-use_ok 'DBIx::Class::Storage::DBI::Replicated::Replicant';
-use_ok 'DBIx::Class::Storage::DBI::Replicated';
-
-use Moose();
-use MooseX::Types();
-diag "Using Moose version $Moose::VERSION and MooseX::Types version $MooseX::Types::VERSION";
-
-=head1 HOW TO USE
-
-  This is a test of the replicated storage system. This will work in one of
-  two ways: either it will try to fake replication with a couple of SQLite DBs
-  and creative use of copy, or, if you define a couple of %ENV vars correctly,
-  it will try to test against those. If you do that, it will assume the setup
-  is properly replicating. Your results may vary, but I have demonstrated this
-  to work with mysql native replication.
-
-=cut
-
-
-## ----------------------------------------------------------------------------
-## Build a class to hold all our required testing data and methods.
-## ----------------------------------------------------------------------------
-
-TESTSCHEMACLASSES: {
-
- ## --------------------------------------------------------------------- ##
- ## Create an object to contain your replicated stuff.
- ## --------------------------------------------------------------------- ##
-
- package DBIx::Class::DBI::Replicated::TestReplication;
-
- use DBICTest;
- use base qw/Class::Accessor::Fast/;
-
- __PACKAGE__->mk_accessors( qw/schema/ );
-
- ## Initialize the object
-
- sub new {
- my ($class, $schema_method) = (shift, shift);
- my $self = $class->SUPER::new(@_);
-
- $self->schema( $self->init_schema($schema_method) );
- return $self;
- }
-
- ## Get the Schema and set the replication storage type
-
- sub init_schema {
- # current SQLT SQLite producer does not handle DROP TABLE IF EXISTS, trap warnings here
- local $SIG{__WARN__} = sub { warn @_ unless $_[0] =~ /no such table.+DROP TABLE/s };
-
- my ($class, $schema_method) = @_;
-
- my $method = "get_schema_$schema_method";
- my $schema = $class->$method;
-
- return $schema;
- }
-
- sub get_schema_by_storage_type {
- DBICTest->init_schema(
- sqlite_use_file => 1,
- storage_type=>{
- '::DBI::Replicated' => {
- balancer_type=>'::Random',
- balancer_args=>{
- auto_validate_every=>100,
- master_read_weight => 1
- },
- }
- },
- deploy_args=>{
- add_drop_table => 1,
- },
- );
- }
-
- sub get_schema_by_connect_info {
- DBICTest->init_schema(
- sqlite_use_file => 1,
- storage_type=> '::DBI::Replicated',
- balancer_type=>'::Random',
- balancer_args=> {
- auto_validate_every=>100,
- master_read_weight => 1
- },
- deploy_args=>{
- add_drop_table => 1,
- },
- );
- }
-
- sub generate_replicant_connect_info {}
- sub replicate {}
- sub cleanup {}
-
- ## --------------------------------------------------------------------- ##
- ## Add a connect_info option to test option merging.
- ## --------------------------------------------------------------------- ##
- {
- package DBIx::Class::Storage::DBI::Replicated;
-
- use Moose;
-
- __PACKAGE__->meta->make_mutable;
-
- around connect_info => sub {
- my ($next, $self, $info) = @_;
- $info->[3]{master_option} = 1;
- $self->$next($info);
- };
-
- __PACKAGE__->meta->make_immutable;
-
- no Moose;
- }
-
- ## --------------------------------------------------------------------- ##
-  ## Subclass for when you are using SQLite for testing; this provides fake
-  ## replication support.
- ## --------------------------------------------------------------------- ##
-
- package DBIx::Class::DBI::Replicated::TestReplication::SQLite;
-
- use DBICTest;
- use File::Copy;
- use base 'DBIx::Class::DBI::Replicated::TestReplication';
-
- __PACKAGE__->mk_accessors(qw/master_path slave_paths/);
-
- ## Set the master path from DBICTest
-
- sub new {
- my $class = shift @_;
- my $self = $class->SUPER::new(@_);
-
- $self->master_path( DBICTest->_sqlite_dbfilename );
- $self->slave_paths([
- File::Spec->catfile(qw/t var DBIxClass_slave1.db/),
- File::Spec->catfile(qw/t var DBIxClass_slave2.db/),
- ]);
-
- return $self;
- }
-
- ## Return an Array of ArrayRefs where each ArrayRef is suitable to use for
- ## $storage->connect_info to be used for connecting replicants.
-
- sub generate_replicant_connect_info {
- my $self = shift @_;
- my @dsn = map {
- "dbi:SQLite:${_}";
- } @{$self->slave_paths};
-
- my @connect_infos = map { [$_,'','',{AutoCommit=>1}] } @dsn;
-
- ## Make sure nothing is left over from a failed test
- $self->cleanup;
-
- ## try a hashref too
- my $c = $connect_infos[0];
- $connect_infos[0] = {
- dsn => $c->[0],
- user => $c->[1],
- password => $c->[2],
- %{ $c->[3] }
- };
-
- @connect_infos
- }
-
- ## Do a 'good enough' replication by copying the master dbfile over each of
- ## the slave dbfiles. If the master is SQLite we do this, otherwise we
- ## just do a one second pause to let the slaves catch up.
-
- sub replicate {
- my $self = shift @_;
- foreach my $slave (@{$self->slave_paths}) {
- copy($self->master_path, $slave);
- }
- }
-
-  ## Cleanup after ourselves. Unlink all the slave paths.
-
- sub cleanup {
- my $self = shift @_;
- foreach my $slave (@{$self->slave_paths}) {
- if(-e $slave) {
- unlink $slave;
- }
- }
- }
-
- ## --------------------------------------------------------------------- ##
-  ## Subclass for when you are setting the databases via custom export vars.
- ## This is for when you have a replicating database setup that you are
- ## going to test against. You'll need to define the correct $ENV and have
- ## two slave databases to test against, as well as a replication system
- ## that will replicate in less than 1 second.
- ## --------------------------------------------------------------------- ##
-
- package DBIx::Class::DBI::Replicated::TestReplication::Custom;
- use base 'DBIx::Class::DBI::Replicated::TestReplication';
-
- ## Return an Array of ArrayRefs where each ArrayRef is suitable to use for
- ## $storage->connect_info to be used for connecting replicants.
-
- sub generate_replicant_connect_info {
- return (
- [$ENV{"DBICTEST_SLAVE0_DSN"}, $ENV{"DBICTEST_SLAVE0_DBUSER"}, $ENV{"DBICTEST_SLAVE0_DBPASS"}, {AutoCommit => 1}],
- [$ENV{"DBICTEST_SLAVE1_DSN"}, $ENV{"DBICTEST_SLAVE1_DBUSER"}, $ENV{"DBICTEST_SLAVE1_DBPASS"}, {AutoCommit => 1}],
- );
- }
-
- ## pause a bit to let the replication catch up
-
- sub replicate {
- sleep 1;
- }
-}
-
-## ----------------------------------------------------------------------------
-## Create an object and run some tests
-## ----------------------------------------------------------------------------
-
-## This first bunch of tests is basic, just making sure all the bits are behaving
-
-my $replicated_class = DBICTest->has_custom_dsn ?
- 'DBIx::Class::DBI::Replicated::TestReplication::Custom' :
- 'DBIx::Class::DBI::Replicated::TestReplication::SQLite';
-
-my $replicated;
-
-for my $method (qw/by_connect_info by_storage_type/) {
- undef $replicated;
- ok $replicated = $replicated_class->new($method)
- => "Created a replication object $method";
-
- isa_ok $replicated->schema
- => 'DBIx::Class::Schema';
-
- isa_ok $replicated->schema->storage
- => 'DBIx::Class::Storage::DBI::Replicated';
-
- isa_ok $replicated->schema->storage->balancer
- => 'DBIx::Class::Storage::DBI::Replicated::Balancer::Random'
- => 'configured balancer_type';
-}
-
-ok $replicated->schema->storage->meta
- => 'has a meta object';
-
-isa_ok $replicated->schema->storage->master
- => 'DBIx::Class::Storage::DBI';
-
-isa_ok $replicated->schema->storage->pool
- => 'DBIx::Class::Storage::DBI::Replicated::Pool';
-
-does_ok $replicated->schema->storage->balancer
- => 'DBIx::Class::Storage::DBI::Replicated::Balancer';
-
-ok my @replicant_connects = $replicated->generate_replicant_connect_info
- => 'got replication connect information';
-
-ok my @replicated_storages = $replicated->schema->storage->connect_replicants(@replicant_connects)
- => 'Created some storages suitable for replicants';
-
-our %debug;
-$replicated->schema->storage->debug(1);
-$replicated->schema->storage->debugcb(sub {
- my ($op, $info) = @_;
- ##warn "\n$op, $info\n";
- %debug = (
- op => $op,
- info => $info,
- dsn => ($info=~m/\[(.+)\]/)[0],
- storage_type => $info=~m/REPLICANT/ ? 'REPLICANT' : 'MASTER',
- );
-});
-
-ok my @all_storages = $replicated->schema->storage->all_storages
- => '->all_storages';
-
-is scalar @all_storages,
- 3
- => 'correct number of ->all_storages';
-
-is ((grep $_->isa('DBIx::Class::Storage::DBI'), @all_storages),
- 3
- => '->all_storages are correct type');
-
-my @all_storage_opts =
- grep { (reftype($_)||'') eq 'HASH' }
- map @{ $_->_connect_info }, @all_storages;
-
-is ((grep $_->{master_option}, @all_storage_opts),
- 3
- => 'connect_info was merged from master to replicants');
-
-my @replicant_names = keys %{ $replicated->schema->storage->replicants };
-
-ok @replicant_names, "found replicant names @replicant_names";
-
-## Silence warning about not supporting the is_replicating method if using the
-## sqlite dbs.
-$replicated->schema->storage->debugobj->silence(1)
- if first { m{^t/} } @replicant_names;
-
-isa_ok $replicated->schema->storage->balancer->current_replicant
- => 'DBIx::Class::Storage::DBI';
-
-$replicated->schema->storage->debugobj->silence(0);
-
-ok $replicated->schema->storage->pool->has_replicants
- => 'does have replicants';
-
-is $replicated->schema->storage->pool->num_replicants => 2
- => 'has two replicants';
-
-does_ok $replicated_storages[0]
- => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
-
-does_ok $replicated_storages[1]
- => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
-
-does_ok $replicated->schema->storage->replicants->{$replicant_names[0]}
- => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
-
-does_ok $replicated->schema->storage->replicants->{$replicant_names[1]}
- => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
-
-## Add some info to the database
-
-$replicated
- ->schema
- ->populate('Artist', [
- [ qw/artistid name/ ],
- [ 4, "Ozric Tentacles"],
- ]);
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- like $debug{info}, qr/INSERT/, 'Last was an insert';
-
-## Make sure all the slaves have the table definitions
-
-$replicated->replicate;
-$replicated->schema->storage->replicants->{$replicant_names[0]}->active(1);
-$replicated->schema->storage->replicants->{$replicant_names[1]}->active(1);
-
-## Silence warning about not supporting the is_replicating method if using the
-## sqlite dbs.
-$replicated->schema->storage->debugobj->silence(1)
- if first { m{^t/} } @replicant_names;
-
-$replicated->schema->storage->pool->validate_replicants;
-
-$replicated->schema->storage->debugobj->silence(0);
-
-## Make sure we can read the data.
-
-ok my $artist1 = $replicated->schema->resultset('Artist')->find(4)
- => 'Created Result';
-
-## We removed testing here since master read weight is on, so we can't tell in
-## advance what storage to expect. We turn master read weight off a bit lower
-## is $debug{storage_type}, 'REPLICANT'
-## => "got last query from a replicant: $debug{dsn}, $debug{info}";
-
-isa_ok $artist1
- => 'DBICTest::Artist';
-
-is $artist1->name, 'Ozric Tentacles'
- => 'Found expected name for first result';
-
-## Check that master_read_weight is honored
-{
- no warnings qw/once redefine/;
-
- local
- *DBIx::Class::Storage::DBI::Replicated::Balancer::Random::_random_number =
- sub { 999 };
-
- $replicated->schema->storage->balancer->increment_storage;
-
- is $replicated->schema->storage->balancer->current_replicant,
- $replicated->schema->storage->master
- => 'master_read_weight is honored';
-
- ## turn it off for the duration of the test
- $replicated->schema->storage->balancer->master_read_weight(0);
- $replicated->schema->storage->balancer->increment_storage;
-}
-
-## Add some new rows that only the master will have. This is because
-## we overload any type of write operation so that it must hit the master
-## database.
-
-$replicated
- ->schema
- ->populate('Artist', [
- [ qw/artistid name/ ],
- [ 5, "Doom's Children"],
- [ 6, "Dead On Arrival"],
- [ 7, "Watergate"],
- ]);
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- like $debug{info}, qr/INSERT/, 'Last was an insert';
-
-## Make sure all the slaves have the table definitions
-$replicated->replicate;
-
-## Should find some data now
-
-ok my $artist2 = $replicated->schema->resultset('Artist')->find(5)
-    => 'Sync succeeded';
-
-is $debug{storage_type}, 'REPLICANT'
- => "got last query from a replicant: $debug{dsn}";
-
-isa_ok $artist2
- => 'DBICTest::Artist';
-
-is $artist2->name, "Doom's Children"
- => 'Found expected name for first result';
-
-## What happens when we disconnect all the replicants?
-
-is $replicated->schema->storage->pool->connected_replicants => 2
- => "both replicants are connected";
-
-$replicated->schema->storage->replicants->{$replicant_names[0]}->disconnect;
-$replicated->schema->storage->replicants->{$replicant_names[1]}->disconnect;
-
-is $replicated->schema->storage->pool->connected_replicants => 0
- => "both replicants are now disconnected";
-
-## All these should pass, since the database should automatically reconnect
-
-ok my $artist3 = $replicated->schema->resultset('Artist')->find(6)
- => 'Still finding stuff.';
-
-is $debug{storage_type}, 'REPLICANT'
- => "got last query from a replicant: $debug{dsn}";
-
-isa_ok $artist3
- => 'DBICTest::Artist';
-
-is $artist3->name, "Dead On Arrival"
- => 'Found expected name for first result';
-
-is $replicated->schema->storage->pool->connected_replicants => 1
- => "At Least One replicant reconnected to handle the job";
-
-## What happens when we try to select something that doesn't exist?
-
-ok ! $replicated->schema->resultset('Artist')->find(666)
- => 'Correctly failed to find something.';
-
-is $debug{storage_type}, 'REPLICANT'
- => "got last query from a replicant: $debug{dsn}";
-
-## test the reliable option
-
-TESTRELIABLE: {
-
- $replicated->schema->storage->set_reliable_storage;
-
- ok $replicated->schema->resultset('Artist')->find(2)
- => 'Read from master 1';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- ok $replicated->schema->resultset('Artist')->find(5)
- => 'Read from master 2';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- $replicated->schema->storage->set_balanced_storage;
-
- ok $replicated->schema->resultset('Artist')->find(3)
- => 'Read from replicant';
-
- is $debug{storage_type}, 'REPLICANT',
- "got last query from a replicant: $debug{dsn}";
-}
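-
-## set_reliable_storage and set_balanced_storage flip read routing globally
-## and stay in effect until toggled back; execute_reliably (exercised further
-## down) is the scoped equivalent, e.g. (a sketch):
-##
-##   $replicated->schema->storage->execute_reliably(sub {
-##       $replicated->schema->resultset('Artist')->find(2); # routed to master
-##   });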
-
-## Make sure when reliable goes out of scope, we are using replicants again
-
-ok $replicated->schema->resultset('Artist')->find(1)
- => 'back to replicant 1.';
-
- is $debug{storage_type}, 'REPLICANT',
- "got last query from a replicant: $debug{dsn}";
-
-ok $replicated->schema->resultset('Artist')->find(2)
- => 'back to replicant 2.';
-
- is $debug{storage_type}, 'REPLICANT',
- "got last query from a replicant: $debug{dsn}";
-
-## set all the replicants to inactive, and make sure the balancer falls back to
-## the master.
-
-$replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
-$replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
-
-{
- ## catch the fallback to master warning
- open my $debugfh, '>', \my $fallback_warning;
- my $oldfh = $replicated->schema->storage->debugfh;
- $replicated->schema->storage->debugfh($debugfh);
-
- ok $replicated->schema->resultset('Artist')->find(2)
- => 'Fallback to master';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- like $fallback_warning, qr/falling back to master/
- => 'emits falling back to master warning';
-
- $replicated->schema->storage->debugfh($oldfh);
-}
-
-$replicated->schema->storage->replicants->{$replicant_names[0]}->active(1);
-$replicated->schema->storage->replicants->{$replicant_names[1]}->active(1);
-
-## Silence the warning about not supporting the is_replicating method if
-## using the SQLite dbs.
-$replicated->schema->storage->debugobj->silence(1)
- if first { m{^t/} } @replicant_names;
-
-$replicated->schema->storage->pool->validate_replicants;
-
-$replicated->schema->storage->debugobj->silence(0);
-
-ok $replicated->schema->resultset('Artist')->find(2)
- => 'Returned to replicants';
-
-is $debug{storage_type}, 'REPLICANT',
- "got last query from a replicant: $debug{dsn}";
-
-## Getting slave status tests
-
-SKIP: {
- ## We skip these tests unless you have custom replicants, since the default
- ## SQLite-based replication tests don't support these functions.
-
- skip 'Cannot Test Replicant Status on Non Replicating Database', 10
- unless DBICTest->has_custom_dsn && $ENV{"DBICTEST_SLAVE0_DSN"};
-
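- ## These checks need a real replicating cluster, described to the test suite
- ## via environment variables along these lines (the SLAVE1 variable is shown
- ## by analogy with the SLAVE0 one checked above; the dsns are illustrative):
- ##
- ##   DBICTEST_DSN=dbi:mysql:database=dbic_test;host=master
- ##   DBICTEST_SLAVE0_DSN=dbi:mysql:database=dbic_test;host=slave0
- ##   DBICTEST_SLAVE1_DSN=dbi:mysql:database=dbic_test;host=slave1
-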
- $replicated->replicate; ## Give the slaves a chance to catch up.
-
- ok $replicated->schema->storage->replicants->{$replicant_names[0]}->is_replicating
- => 'Replicants are replicating';
-
- is $replicated->schema->storage->replicants->{$replicant_names[0]}->lag_behind_master, 0
- => 'Replicant is zero seconds behind master';
-
- ## Test the validate replicants
-
- $replicated->schema->storage->pool->validate_replicants;
-
- is $replicated->schema->storage->pool->active_replicants, 2
- => 'Still have 2 replicants after validation';
-
- ## Force the replicants to fail the validate test by requiring their lag to
- ## be negative (i.e. ahead of the master!)
-
- $replicated->schema->storage->pool->maximum_lag(-10);
- $replicated->schema->storage->pool->validate_replicants;
-
- is $replicated->schema->storage->pool->active_replicants, 0
- => 'No way a replicant can be ahead of the master';
-
- ## Let's be fair to the replicants again. Let them lag up to 5
-
- $replicated->schema->storage->pool->maximum_lag(5);
- $replicated->schema->storage->pool->validate_replicants;
-
- is $replicated->schema->storage->pool->active_replicants, 2
- => 'Both replicants in good standing again';
-
- ## Check auto validate
-
- is $replicated->schema->storage->balancer->auto_validate_every, 100
- => "Got the expected value for auto validate";
-
- ## This will make sure we auto validate every time
- $replicated->schema->storage->balancer->auto_validate_every(0);
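-
- ## auto_validate_every is an interval in seconds between automatic
- ## validate_replicants runs; 0 means the pool gets revalidated before
- ## essentially every read. It would normally be set alongside the balancer,
- ## e.g. (a sketch; the value is illustrative):
- ##
- ##   balancer_args => { auto_validate_every => 500 },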
-
- ## set all the replicants to inactive, and make sure the balancer falls back to
- ## the master.
-
- $replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
- $replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
-
- ## Ok, now when we go to run a query, autovalidate SHOULD reconnect
-
- is $replicated->schema->storage->pool->active_replicants => 0
- => "both replicants turned off";
-
- ok $replicated->schema->resultset('Artist')->find(5)
- => 'replicant reactivated';
-
- is $debug{storage_type}, 'REPLICANT',
- "got last query from a replicant: $debug{dsn}";
-
- is $replicated->schema->storage->pool->active_replicants => 2
- => "both replicants reactivated";
-}
-
-## Test the reliably callback
-
-ok my $reliably = sub {
-
- ok $replicated->schema->resultset('Artist')->find(5)
- => 'Read inside execute_reliably';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
-} => 'created coderef properly';
-
-$replicated->schema->storage->execute_reliably($reliably);
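-
-## execute_reliably also forwards any extra arguments to the coderef and
-## returns its result, with calling context preserved, so the pattern above
-## can be parameterized, e.g. (a sketch):
-##
-##   my $artist = $replicated->schema->storage->execute_reliably(
-##       sub { $replicated->schema->resultset('Artist')->find($_[0]) },
-##       5,
-##   );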
-
-## Try something with an error
-
-ok my $unreliably = sub {
-
- ok $replicated->schema->resultset('ArtistXX')->find(5)
- => 'should not get here, the find above should throw';
-
-} => 'created coderef properly';
-
-throws_ok {$replicated->schema->storage->execute_reliably($unreliably)}
- qr/Can't find source for ArtistXX/
- => 'Bad coderef throws proper error';
-
-## Make sure replication came back
-
-ok $replicated->schema->resultset('Artist')->find(3)
- => 'replicant reactivated';
-
-is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
-
-## make sure transactions are set to execute_reliably
-
-ok my $transaction = sub {
-
- my $id = shift @_;
-
- $replicated
- ->schema
- ->populate('Artist', [
- [ qw/artistid name/ ],
- [ $id, "Children of the Grave"],
- ]);
-
- ok my $result = $replicated->schema->resultset('Artist')->find($id)
- => "Found expected artist for $id";
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'Found expected artist again for 1';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- return ($result, $more);
-
-} => 'Created a coderef properly';
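-
-## Under replicated storage, txn_do runs its coderef via execute_reliably, so
-## even plain reads inside the transaction are pinned to the master; that is
-## what the storage_type checks inside the coderef above verify, e.g.:
-##
-##   $replicated->schema->txn_do(sub {
-##       $replicated->schema->resultset('Artist')->find(1); # hits MASTER
-##   });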
-
-## Test the transaction with multi return
-{
- ok my @return = $replicated->schema->txn_do($transaction, 666)
- => 'did transaction';
-
- is $return[0]->id, 666
- => 'first returned value is correct';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
- is $return[1]->id, 1
- => 'second returned value is correct';
-
- is $debug{storage_type}, 'MASTER',
- "got last query from a master: $debug{dsn}";
-
-}
-
-## Test that asking for single return works
-{
- ok my @return = $replicated->schema->txn_do($transaction, 777)
- => 'did transaction';
-
- is $return[0]->id, 777
- => 'first returned value is correct';
-
- is $return[1]->id, 1
- => 'second returned value is correct';
-}
-
-## Test transaction returning a single value
-
-{
- ok my $result = $replicated->schema->txn_do(sub {
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'found inside a transaction';
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
- return $more;
- }) => 'successfully processed transaction';
-
- is $result->id, 1
- => 'Got expected single result from transaction';
-}
-
-## Make sure replication came back
-
-ok $replicated->schema->resultset('Artist')->find(1)
- => 'replicant reactivated';
-
-is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
-
-## Test Discard changes
-
-{
- ok my $artist = $replicated->schema->resultset('Artist')->find(2)
- => 'got an artist to test discard changes';
-
- is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
-
- ok $artist->get_from_storage({force_pool=>'master'})
- => 'properly called get_from_storage against master';
-
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
-
- ok $artist->discard_changes({force_pool=>'master'})
- => 'properly called discard_changes against master (manual attrs)';
-
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
-
- ok $artist->discard_changes()
- => 'properly called discard_changes against master (default attrs)';
-
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
-
- ok $artist->discard_changes({force_pool=>$replicant_names[0]})
- => 'properly able to override the default attributes';
-
- is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}"
-}
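-
-## force_pool accepts either the literal string 'master' or the name (dsn) of
-## a connected replicant, and works both as a method attribute (as above) and
-## as a resultset attribute (tested further down), e.g. (a sketch):
-##
-##   $artist->discard_changes({ force_pool => 'master' });
-##   my $rs = $replicated->schema->resultset('Artist')
-##              ->search(undef, { force_pool => $replicant_names[0] });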
-
-## Test some edge cases, like trying to do a transaction inside a transaction, etc.
-
-{
- ok my $result = $replicated->schema->txn_do(sub {
- return $replicated->schema->txn_do(sub {
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'found inside a transaction inside a transaction';
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
- return $more;
- });
- }) => 'successfully processed transaction';
-
- is $result->id, 1
- => 'Got expected single result from transaction';
-}
-
-{
- ok my $result = $replicated->schema->txn_do(sub {
- return $replicated->schema->storage->execute_reliably(sub {
- return $replicated->schema->txn_do(sub {
- return $replicated->schema->storage->execute_reliably(sub {
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'found inside crazy deep transactions and execute_reliably';
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
- return $more;
- });
- });
- });
- }) => 'successfully processed transaction';
-
- is $result->id, 1
- => 'Got expected single result from transaction';
-}
-
-## Test the force_pool resultset attribute.
-
-{
- ok my $artist_rs = $replicated->schema->resultset('Artist')
- => 'got artist resultset';
-
- ## Turn on Forced Pool Storage
- ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>'master'})
- => 'Created a resultset using force_pool storage';
-
- ok my $artist = $reliable_artist_rs->find(2)
- => 'got an artist result via force_pool storage';
-
- is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
-}
-
-## Test the force_pool resultset attribute part two.
-
-{
- ok my $artist_rs = $replicated->schema->resultset('Artist')
- => 'got artist resultset';
-
- ## Turn on Forced Pool Storage
- ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>$replicant_names[0]})
- => 'Created a resultset using force_pool storage';
-
- ok my $artist = $reliable_artist_rs->find(2)
- => 'got an artist result via force_pool storage';
-
- is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
-}
-## Delete the old database files
-$replicated->cleanup;
-
-done_testing;
-
-# vim: sw=4 sts=4 :