[Bast-commits] r6624 - in DBIx-Class/0.08/branches/mc_fixes: .
lib/DBIx lib/DBIx/Class lib/DBIx/Class/InflateColumn
lib/DBIx/Class/SQLAHacks lib/DBIx/Class/Storage
lib/DBIx/Class/Storage/DBI t t/bind t/inflate
t/lib/DBICNSTest/Bogus t/lib/DBICTest t/lib/DBICTest/Schema
t/multi_create t/prefetch t/search t/update
ribasushi at dev.catalyst.perl.org
Thu Jun 11 14:54:49 GMT 2009
Author: ribasushi
Date: 2009-06-11 14:54:49 +0000 (Thu, 11 Jun 2009)
New Revision: 6624
Added:
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/MySQL.pm
DBIx-Class/0.08/branches/mc_fixes/t/18insert_default.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/
DBIx-Class/0.08/branches/mc_fixes/t/inflate/core.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_mysql.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_pg.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/file_column.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/hri.t
DBIx-Class/0.08/branches/mc_fixes/t/inflate/serialize.t
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICNSTest/Bogus/B.pm
DBIx-Class/0.08/branches/mc_fixes/t/multi_create/insert_defaults.t
DBIx-Class/0.08/branches/mc_fixes/t/search/preserve_original_rs.t
DBIx-Class/0.08/branches/mc_fixes/t/update/
DBIx-Class/0.08/branches/mc_fixes/t/update/type_aware.t
Removed:
DBIx-Class/0.08/branches/mc_fixes/t/68inflate.t
DBIx-Class/0.08/branches/mc_fixes/t/68inflate_resultclass_hashrefinflator.t
DBIx-Class/0.08/branches/mc_fixes/t/68inflate_serialize.t
DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t
DBIx-Class/0.08/branches/mc_fixes/t/96file_column.t
DBIx-Class/0.08/branches/mc_fixes/t/prefetch/pollute_already_joined.t
Modified:
DBIx-Class/0.08/branches/mc_fixes/
DBIx-Class/0.08/branches/mc_fixes/Changes
DBIx-Class/0.08/branches/mc_fixes/Makefile.PL
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/InflateColumn/DateTime.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSet.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSource.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Row.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/OracleJoins.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/Cursor.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/SQLite.pm
DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/mysql.pm
DBIx-Class/0.08/branches/mc_fixes/t/03podcoverage.t
DBIx-Class/0.08/branches/mc_fixes/t/36datetime.t
DBIx-Class/0.08/branches/mc_fixes/t/52cycle.t
DBIx-Class/0.08/branches/mc_fixes/t/71mysql.t
DBIx-Class/0.08/branches/mc_fixes/t/86sqlt.t
DBIx-Class/0.08/branches/mc_fixes/t/bind/bindtype_columns.t
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/AuthorCheck.pm
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Artist.pm
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Bookmark.pm
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Event.pm
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZ.pm
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZDeprecated.pm
DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Link.pm
DBIx-Class/0.08/branches/mc_fixes/t/prefetch/rows_bug.t
Log:
r6538 at Thesaurus (orig r6537): ribasushi | 2009-06-07 23:07:55 +0200
Fix for mysql subquery problem
r6539 at Thesaurus (orig r6538): ribasushi | 2009-06-07 23:36:43 +0200
Make empty/default inserts use standard SQL
r6540 at Thesaurus (orig r6539): ribasushi | 2009-06-08 00:59:21 +0200
Add mysql empty insert SQL override
Make SQLAHacks parts loadable at runtime via ensure_class_loaded
r6541 at Thesaurus (orig r6540): ribasushi | 2009-06-08 01:03:04 +0200
Make podcoverage happy
r6542 at Thesaurus (orig r6541): ribasushi | 2009-06-08 01:24:06 +0200
Fix find_or_new/create to stop returning random rows when default value insert is requested
r6543 at Thesaurus (orig r6542): ribasushi | 2009-06-08 11:36:56 +0200
Simplify order_by/_virtual_order_by handling
r6553 at Thesaurus (orig r6552): ribasushi | 2009-06-08 23:56:41 +0200
duh
r6557 at Thesaurus (orig r6556): ash | 2009-06-09 12:20:34 +0200
Adjust bug to show problem with rows => 1 + child rel
r6558 at Thesaurus (orig r6557): ribasushi | 2009-06-09 13:12:46 +0200
Require a recent bugfixed Devel::Cycle
r6560 at Thesaurus (orig r6559): ash | 2009-06-09 15:07:30 +0200
Make IC::DT extra warning state the column name too
r6575 at Thesaurus (orig r6574): ribasushi | 2009-06-10 00:19:48 +0200
AuthorCheck fixes
r6579 at Thesaurus (orig r6578): ribasushi | 2009-06-10 00:52:17 +0200
r6522 at Thesaurus (orig r6521): ribasushi | 2009-06-05 19:27:55 +0200
New branch to try resultsource related stuff
r6545 at Thesaurus (orig r6544): ribasushi | 2009-06-08 13:00:54 +0200
First stab at adding resultsources to each join in select - works won-der-ful-ly
r6546 at Thesaurus (orig r6545): ribasushi | 2009-06-08 13:14:08 +0200
Commit failing test and thoughts on search arg deflation
r6576 at Thesaurus (orig r6575): ribasushi | 2009-06-10 00:31:55 +0200
Todoify DT in search deflation test until after 0.09
r6577 at Thesaurus (orig r6576): ribasushi | 2009-06-10 00:48:07 +0200
Factor out the $ident resolver
r6581 at Thesaurus (orig r6580): ribasushi | 2009-06-10 01:21:50 +0200
Move as_query out of the cursor
r6582 at Thesaurus (orig r6581): ribasushi | 2009-06-10 01:27:19 +0200
Think before commit
r6583 at Thesaurus (orig r6582): ribasushi | 2009-06-10 09:37:19 +0200
Clarify and disable rows/prefetch test - fix is easy, but architecturally unsound - need more time
r6591 at Thesaurus (orig r6590): ribasushi | 2009-06-10 13:33:37 +0200
r6544 at Thesaurus (orig r6543): ribasushi | 2009-06-08 11:44:59 +0200
Attempt to figure out why we repeat joins on complex search_related
r6586 at Thesaurus (orig r6585): ribasushi | 2009-06-10 11:22:05 +0200
Move the rs preservation test to a more suitable place
r6589 at Thesaurus (orig r6588): ribasushi | 2009-06-10 13:15:48 +0200
Finally commit truly failing test
r6590 at Thesaurus (orig r6589): ribasushi | 2009-06-10 13:33:14 +0200
Duh, this was a pretty simple bug
r6593 at Thesaurus (orig r6592): ribasushi | 2009-06-10 13:43:31 +0200
What was I thinking - resultsource does not have an ->alias
r6598 at Thesaurus (orig r6597): ribasushi | 2009-06-10 14:48:39 +0200
Adjust changelog
r6601 at Thesaurus (orig r6600): ribasushi | 2009-06-10 15:50:43 +0200
Release 0.08104
r6615 at Thesaurus (orig r6614): ribasushi | 2009-06-11 14:29:48 +0200
Move around inflation tests
r6616 at Thesaurus (orig r6615): ribasushi | 2009-06-11 14:32:07 +0200
explicitly remove manifest on author mode make
r6617 at Thesaurus (orig r6616): ribasushi | 2009-06-11 15:02:41 +0200
IC::DT changes:
Switch SQLite storage to DT::F::SQLite
Fix exception when undef_if_invalid and timezone are both set on a column
Split t/89inflate_datetime into separate tests
Adjust makefile author dependencies
r6618 at Thesaurus (orig r6617): ribasushi | 2009-06-11 15:07:41 +0200
Move file_column test to inflate/ too
r6621 at Thesaurus (orig r6620): ribasushi | 2009-06-11 16:16:20 +0200
r5713 at Thesaurus (orig r5712): ribasushi | 2009-03-08 23:53:28 +0100
Branch for datatype-aware updates
r6604 at Thesaurus (orig r6603): ribasushi | 2009-06-10 18:08:25 +0200
Test for type-aware update
r6607 at Thesaurus (orig r6606): ribasushi | 2009-06-10 19:57:04 +0200
Datatype aware update works
r6609 at Thesaurus (orig r6608): ribasushi | 2009-06-10 20:06:40 +0200
Whoops
r6614 at Thesaurus (orig r6613): ribasushi | 2009-06-11 09:23:54 +0200
Add attribute doc
r6620 at Thesaurus (orig r6619): ribasushi | 2009-06-11 16:15:53 +0200
Use equality, not comparison
r6623 at Thesaurus (orig r6622): ribasushi | 2009-06-11 16:21:53 +0200
Changes
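
A minimal usage sketch of the empty/default insert behaviour referenced in the log above, assuming a connected DBICTest-style schema (it mirrors the new t/18insert_default.t added in this commit):

  use lib qw(t/lib);
  use DBICTest;

  my $schema = DBICTest->init_schema();

  # create() with no column data now emits standard SQL
  # (INSERT INTO artist DEFAULT VALUES), or the MySQL-only
  # INSERT INTO artist () VALUES () via the new SQLAHacks::MySQL
  my $artist = $schema->resultset('Artist')->create({});

  $artist->discard_changes;      # refresh to pick up DB-side defaults
  print $artist->rank, "\n";     # column default, 13 in the test schema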
Property changes on: DBIx-Class/0.08/branches/mc_fixes
___________________________________________________________________
Name: svk:merge
- 168d5346-440b-0410-b799-f706be625ff1:/DBIx-Class-current:2207
462d4d0c-b505-0410-bf8e-ce8f877b3390:/local/bast/DBIx-Class:3159
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/resultsetcolumn_custom_columns:5160
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/sqla_1.50_compat:5414
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/trunk:5969
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class:32260
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class-CDBICompat:54993
9c88509d-e914-0410-b01c-b9530614cbfe:/vendor/DBIx-Class:31122
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_column_attr:10946
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_trunk:11142
bd5ac9a7-f185-4d95-9186-dbb8b392a572:/local/os/bast/DBIx-Class/0.08/trunk:2798
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/belongs_to_null_col_fix:5244
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/cdbicompat_integration:4160
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/column_attr:5074
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/complex_join_rels:4589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/count_distinct:6218
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/diamond_relationships:6310
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/file_column:3920
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/fix-update-and-delete-as_query:6162
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/joined_count:6323
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multi_stuff:5565
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/on_disconnect_do:3694
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle-tweaks:6222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle_sequence:4173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/order_by_refactor:6475
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/parser_fk_index:4485
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch:5699
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/replication_dedux:4600
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rt_bug_41083:5437
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/savepoints:4223
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sqla_1.50_compat:5321
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-ms-access:4142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-tweaks:6262
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subclassed_rsset:5930
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subquery:5617
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase:5651
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_mssql:6125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/top_limit_altfix:6429
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioned_enhancements:4125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioning:4578
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/views:5585
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-C3:318
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-current:2222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-joins:173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-resultset:570
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/datetime:1716
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_compat:1855
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_unique_query_fixes:2142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/inflate:1988
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/many_to_many:2025
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/re_refactor_bugfix:1944
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/reorganize_tests:1827
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset-new-refactor:1766
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_2_electric_boogaloo:2175
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_cleanup:2102
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/sqlt_tests_refactor:2043
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/trunk/DBIx-Class:3606
fe160bb6-dc1c-0410-9f2b-d64a711b54a5:/local/DBIC-trunk-0.08:10510
+ 168d5346-440b-0410-b799-f706be625ff1:/DBIx-Class-current:2207
462d4d0c-b505-0410-bf8e-ce8f877b3390:/local/bast/DBIx-Class:3159
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/resultsetcolumn_custom_columns:5160
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/branches/sqla_1.50_compat:5414
4d5fae46-8e6a-4e08-abee-817e9fb894a2:/local/bast/DBIx-Class/0.08/trunk:5969
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class:32260
9c88509d-e914-0410-b01c-b9530614cbfe:/local/DBIx-Class-CDBICompat:54993
9c88509d-e914-0410-b01c-b9530614cbfe:/vendor/DBIx-Class:31122
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_column_attr:10946
ab17426e-7cd3-4704-a2a2-80b7c0a611bb:/local/dbic_trunk:11142
bd5ac9a7-f185-4d95-9186-dbb8b392a572:/local/os/bast/DBIx-Class/0.08/trunk:2798
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/belongs_to_null_col_fix:5244
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/cdbicompat_integration:4160
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/column_attr:5074
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/complex_join_rels:4589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/count_distinct:6218
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/diamond_relationships:6310
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/file_column:3920
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/fix-update-and-delete-as_query:6162
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/joined_count:6323
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/multi_stuff:5565
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/mystery_join:6589
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/on_disconnect_do:3694
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle-tweaks:6222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/oracle_sequence:4173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/order_by_refactor:6475
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/parser_fk_index:4485
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/prefetch:5699
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/replication_dedux:4600
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rsrc_in_storage:6577
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/rt_bug_41083:5437
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/savepoints:4223
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sqla_1.50_compat:5321
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-ms-access:4142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/storage-tweaks:6262
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subclassed_rsset:5930
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/subquery:5617
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase:5651
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/sybase_mssql:6125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/top_limit_altfix:6429
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/type_aware_update:6619
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioned_enhancements:4125
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/versioning:4578
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/branches/views:5585
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/DBIx-Class/0.08/trunk:6622
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-C3:318
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-current:2222
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-joins:173
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class-resultset:570
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/datetime:1716
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_compat:1855
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/find_unique_query_fixes:2142
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/inflate:1988
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/many_to_many:2025
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/re_refactor_bugfix:1944
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/reorganize_tests:1827
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset-new-refactor:1766
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_2_electric_boogaloo:2175
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/resultset_cleanup:2102
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/branches/DBIx-Class/sqlt_tests_refactor:2043
bd8105ee-0ff8-0310-8827-fb3f25b6796d:/trunk/DBIx-Class:3606
fe160bb6-dc1c-0410-9f2b-d64a711b54a5:/local/DBIC-trunk-0.08:10510
Modified: DBIx-Class/0.08/branches/mc_fixes/Changes
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/Changes 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/Changes 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,13 +1,35 @@
Revision history for DBIx::Class
+ - Update of numeric columns now properly uses != to determine
+ dirtiness instead of the usual eq
+ - Fixes to IC::DT tests
+ - Fixed exception when undef_if_invalid and timezone are both set on
+ an invalid datetime column
+
+0.08104 2009-06-10 13:38:00 (UTC)
- order_by now can take \[$sql, @bind] as in
order_by => { -desc => \['colA LIKE ?', 'somestring'] }
- SQL::Abstract errors are now properly croak()ed with the
correct trace
- populate() now properly reports the dataset slice in case of
an exception
- - fixed corner case when populate() erroneously falls back to
+ - Fixed corner case when populate() erroneously falls back to
create()
+ - Work around braindead mysql when doing subquery counts on
+ resultsets containing identically named columns from several
+ tables
+ - Fixed m2m add_to_$rel to invoke find_or_create on the far
+ side of the relation, to avoid duplicates
+ - DBIC now properly handles empty inserts (invoking all default
+ values from the DB, normally via INSERT INTO tbl DEFAULT VALUES)
+ - Fix find_or_new/create to stop returning random rows when
+ default value insert is requested (RT#28875)
+ - Make IC::DT extra warning state the column name too
+ - It is now possible to transparently search() on columns
+ requiring DBI bind (i.e. PostgreSQL BLOB)
+ - as_query is now a Storage::DBI method, so custom cursors can
+ be seamlessly used
+ - Fix search_related regression introduced in 0.08103
0.08103 2009-05-26 19:50:00 (UTC)
- Multiple $resultset -> count/update/delete fixes. Now any
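
A short sketch of the as_query entry above, following the pattern exercised in the updated t/71mysql.t (assumes $schema is a connected test schema):

  # as_query returns \[ $sql, @bind ] without executing anything, so a
  # resultset can be embedded as an IN-subselect of another search
  my $inner = $schema->resultset('Owners')
    ->search ({ 'books.id' => { '!=', undef } }, { prefetch => 'books' })
    ->get_column ('me.id')
    ->as_query;

  my $owners = $schema->resultset('Owners')->search ({ id => { -in => $inner } });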
Modified: DBIx-Class/0.08/branches/mc_fixes/Makefile.PL
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/Makefile.PL 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/Makefile.PL 2009-06-11 14:54:49 UTC (rev 6624)
@@ -71,13 +71,19 @@
# t/52cycle.t
'Test::Memory::Cycle' => 0,
+ 'Devel::Cycle' => 1.10,
+ # t/inflate/datetime*.t
+ # t/72pg.t
+ # t/36datetime.t
# t/60core.t
+ 'DateTime::Format::SQLite' => 0,
'DateTime::Format::MySQL' => 0,
-
- # t/89inflate_datetime.t
'DateTime::Format::Pg' => 0,
+ # t/96_is_deteministic_value.t
+ 'DateTime::Format::Strptime' => 0,
+
# t/72pg.t
$ENV{DBICTEST_PG_DSN}
? ('Sys::SigAction'=> 0)
@@ -91,8 +97,6 @@
'namespace::clean' => 0.11,
'Hash::Merge', => 0.11,
- # t/96_is_deteministic_value.t
- 'DateTime::Format::Strptime' => 0,
);
if ($Module::Install::AUTHOR) {
@@ -111,13 +115,17 @@
build_requires ($module => $force_requires_if_author{$module});
}
+ print "Regenerating README\n";
system('pod2text lib/DBIx/Class.pm > README');
+
+ if (-f 'MANIFEST') {
+ print "Removing MANIFEST\n";
+ unlink 'MANIFEST';
+ }
}
-auto_provides;
+auto_install();
-auto_install;
-
WriteAll();
# Re-write META.yml to _exclude_ all forced requires (we do not want to ship this)
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/InflateColumn/DateTime.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/InflateColumn/DateTime.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/InflateColumn/DateTime.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -116,14 +116,14 @@
my $timezone;
if ( defined $info->{extra}{timezone} ) {
carp "Putting timezone into extra => { timezone => '...' } has been deprecated, ".
- "please put it directly into the columns definition.";
+ "please put it directly into the '$column' column definition.";
$timezone = $info->{extra}{timezone};
}
my $locale;
if ( defined $info->{extra}{locale} ) {
carp "Putting locale into extra => { locale => '...' } has been deprecated, ".
- "please put it directly into the columns definition.";
+ "please put it directly into the '$column' column definition.";
$locale = $info->{extra}{locale};
}
@@ -139,7 +139,7 @@
if (defined $info->{extra}{floating_tz_ok}) {
carp "Putting floating_tz_ok into extra => { floating_tz_ok => 1 } has been deprecated, ".
- "please put it directly into the columns definition.";
+ "please put it directly into the '$column' column definition.";
$info{floating_tz_ok} = $info->{extra}{floating_tz_ok};
}
@@ -148,9 +148,13 @@
{
inflate => sub {
my ($value, $obj) = @_;
+
my $dt = eval { $obj->_inflate_to_datetime( $value, \%info ) };
- $self->throw_exception ("Error while inflating ${value} for ${column} on ${self}: $@")
- if $@ and not $undef_if_invalid;
+ if (my $err = $@ ) {
+ return undef if ($undef_if_invalid);
+ $self->throw_exception ("Error while inflating ${value} for ${column} on ${self}: $err");
+ }
+
$dt->set_time_zone($timezone) if $timezone;
$dt->set_locale($locale) if $locale;
return $dt;
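
A sketch of the undef_if_invalid + timezone combination fixed above; the column definition and $row are illustrative only:

  __PACKAGE__->add_columns(
    starts_at => {
      data_type                 => 'datetime',
      timezone                  => 'America/Chicago',
      datetime_undef_if_invalid => 1,
    },
  );

  # Inflating an unparsable stored value now returns undef instead of
  # dying while trying to call set_time_zone() on a non-object
  my $dt = $row->starts_at;   # undef, no exception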
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSet.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSet.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSet.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -661,7 +661,6 @@
my ($self) = @_;
my $attrs = $self->_resolved_attrs_copy;
- $attrs->{_virtual_order_by} = $self->_gen_virtual_order;
return $self->{cursor}
||= $self->result_source->storage->select($attrs->{from}, $attrs->{select},
@@ -714,7 +713,6 @@
}
my $attrs = $self->_resolved_attrs_copy;
- $attrs->{_virtual_order_by} = $self->_gen_virtual_order;
if ($where) {
if (defined $attrs->{where}) {
@@ -742,16 +740,7 @@
return (@data ? ($self->_construct_object(@data))[0] : undef);
}
-# _gen_virtual_order
-#
-# This is a horrble hack, but seems like the best we can do at this point
-# Some limit emulations (Top) require an ordered resultset in order to
-# function at all. So supply a PK order to be used if necessary
-sub _gen_virtual_order {
- return [ shift->result_source->primary_columns ];
-}
-
# _is_unique_query
#
# Try to determine if the specified query is guaranteed to be unique, based on
@@ -1329,7 +1318,7 @@
my $subrs = (ref $self)->new($rsrc, $attrs);
- return $self->result_source->storage->subq_update_delete($subrs, $op, $values);
+ return $self->result_source->storage->_subq_update_delete($subrs, $op, $values);
}
else {
return $rsrc->storage->$op(
@@ -1936,7 +1925,10 @@
=cut
-sub as_query { return shift->cursor->as_query(@_) }
+sub as_query {
+ my $self = shift;
+ return $self->result_source->storage->as_query($self->_resolved_attrs);
+}
=head2 find_or_new
@@ -1977,8 +1969,10 @@
my $self = shift;
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
- my $exists = $self->find($hash, $attrs);
- return defined $exists ? $exists : $self->new_result($hash);
+ if (keys %$hash and my $row = $self->find($hash, $attrs) ) {
+ return $row;
+ }
+ return $self->new_result($hash);
}
=head2 create
@@ -2108,8 +2102,10 @@
my $self = shift;
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
- my $exists = $self->find($hash, $attrs);
- return defined $exists ? $exists : $self->create($hash);
+ if (keys %$hash and my $row = $self->find($hash, $attrs) ) {
+ return $row;
+ }
+ return $self->create($hash);
}
=head2 update_or_create
@@ -2434,8 +2430,15 @@
my $source = $self->result_source;
my $attrs = $self->{attrs};
- my $from = $attrs->{from}
- || [ { $attrs->{alias} => $source->from } ];
+ my $from = [ @{
+ $attrs->{from}
+ ||
+ [{
+ -result_source => $source,
+ -alias => $attrs->{alias},
+ $attrs->{alias} => $source->from,
+ }]
+ }];
my $seen = { %{$attrs->{seen_join} || {} } };
@@ -2540,7 +2543,11 @@
push( @{ $attrs->{as} }, @$adds );
}
- $attrs->{from} ||= [ { $self->{attrs}{alias} => $source->from } ];
+ $attrs->{from} ||= [ {
+ -result_source => $source,
+ -alias => $self->{attrs}{alias},
+ $self->{attrs}{alias} => $source->from,
+ } ];
if ( exists $attrs->{join} || exists $attrs->{prefetch} ) {
my $join = delete $attrs->{join} || {};
@@ -2571,6 +2578,14 @@
$attrs->{order_by} = [];
}
+ # If the order_by is otherwise empty - we will use this for TOP limit
+ # emulation and the like.
+ # Although this is needed only if the order_by is not defined, it is
+ # actually cheaper to just populate this rather than properly examining
+ # order_by (stuff like [ {} ] and the like)
+ $attrs->{_virtual_order_by} = [ $self->result_source->primary_columns ];
+
+
my $collapse = $attrs->{collapse} || {};
if ( my $prefetch = delete $attrs->{prefetch} ) {
$prefetch = $self->_merge_attr( {}, $prefetch );
@@ -2613,7 +2628,7 @@
my $p = $paths;
$p = $p->{$_} ||= {} for @{$j->[0]{-join_path}};
- push @{$p->{-join_aliases} }, $j->[0]{-join_alias};
+ push @{$p->{-join_aliases} }, $j->[0]{-alias};
}
return $paths;
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSource.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSource.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/ResultSource.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -116,6 +116,18 @@
when cloning objects using L<DBIx::Class::Row/copy>. It is also used by
L<DBIx::Class::Schema/deploy>.
+=item is_numeric
+
+Set this to a true or false value (not C<undef>) to explicitly specify
+if this column contains numeric data. This controls how set_column
+decides whether to consider a column dirty after an update: if
+C<is_numeric> is true a numeric comparison C<< != >> will take place
+instead of the usual C<eq>.
+
+If not specified the storage class will attempt to figure this out on
+first access to the column, based on the column C<data_type>. The
+result will be cached in this attribute.
+
=item is_foreign_key
Set this to a true value for a column that contains a key from a
@@ -1120,10 +1132,13 @@
$type = $rel_info->{attrs}{join_type} || '';
$force_left->{force} = 1 if lc($type) eq 'left';
}
- return [ { $as => $self->related_source($join)->from,
+
+ my $rel_src = $self->related_source($join);
+ return [ { $as => $rel_src->from,
+ -result_source => $rel_src,
-join_type => $type,
-join_path => [@$jpath, $join],
- -join_alias => $as,
+ -alias => $as,
-relation_chain_depth => $seen->{-relation_chain_depth} || 0,
},
$self->_resolve_condition($rel_info->{cond}, $as, $alias) ];
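
An illustrative declaration of the new is_numeric attribute documented above (the column name is hypothetical); when omitted, the storage guesses from data_type on first use and caches the result:

  __PACKAGE__->add_columns(
    price => {
      data_type  => 'varchar',   # a type the storage would not guess as numeric
      size       => 16,
      is_numeric => 1,           # force the != (numeric) dirtiness comparison anyway
    },
  );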
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Row.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Row.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Row.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -769,9 +769,40 @@
my $old_value = $self->get_column($column);
$self->store_column($column, $new_value);
- $self->{_dirty_columns}{$column} = 1
- if (defined $old_value xor defined $new_value) || (defined $old_value && $old_value ne $new_value);
+ my $dirty;
+ if (defined $old_value xor defined $new_value) {
+ $dirty = 1;
+ }
+ elsif (not defined $old_value) { # both undef
+ $dirty = 0;
+ }
+ elsif ($old_value eq $new_value) {
+ $dirty = 0;
+ }
+ else { # do a numeric comparison if datatype allows it
+ my $colinfo = $self->column_info ($column);
+
+ # cache for speed
+ if (not defined $colinfo->{is_numeric}) {
+ $colinfo->{is_numeric} =
+ $self->result_source->schema->storage->is_datatype_numeric ($colinfo->{data_type})
+ ? 1
+ : 0
+ ;
+ }
+
+ if ($colinfo->{is_numeric}) {
+ $dirty = $old_value != $new_value;
+ }
+ else {
+ $dirty = 1;
+ }
+ }
+
+ # sadly the update code just checks for keys, not for their value
+ $self->{_dirty_columns}{$column} = 1 if $dirty;
+
# XXX clear out the relation cache for this column
delete $self->{related_resultsets}{$column};
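
The practical effect of the set_column change above, sketched for a hypothetical row whose price column is numeric and whose title column is not:

  $row->price(100);                   # stored value is already '100'
  $row->is_column_changed('price');   # false: 100 != '100' is not true, column stays clean

  $row->title('100');                 # non-numeric column falls back to 'eq'
  $row->is_column_changed('title');   # true whenever the old value was anything else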
Added: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/MySQL.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/MySQL.pm (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/MySQL.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,24 @@
+package # Hide from PAUSE
+ DBIx::Class::SQLAHacks::MySQL;
+
+use base qw( DBIx::Class::SQLAHacks );
+use Carp::Clan qw/^DBIx::Class|^SQL::Abstract/;
+
+#
+# MySQL does not understand the standard INSERT INTO $table DEFAULT VALUES
+# Adjust SQL here instead
+#
+sub insert {
+ my $self = shift;
+
+ my $table = $_[0];
+ $table = $self->_quote($table) unless ref($table);
+
+ if (! $_[1] or (ref $_[1] eq 'HASH' and !keys %{$_[1]} ) ) {
+ return "INSERT INTO ${table} () VALUES ()"
+ }
+
+ return $self->SUPER::insert (@_);
+}
+
+1;
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/OracleJoins.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/OracleJoins.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks/OracleJoins.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -2,7 +2,7 @@
DBIx::Class::SQLAHacks::OracleJoins;
use base qw( DBIx::Class::SQLAHacks );
-use Carp::Clan qw/^DBIx::Class/;
+use Carp::Clan qw/^DBIx::Class|^SQL::Abstract/;
sub select {
my ($self, $table, $fields, $where, $order, @rest) = @_;
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/SQLAHacks.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -193,6 +193,14 @@
my $self = shift;
my $table = shift;
$table = $self->_quote($table) unless ref($table);
+
+ # SQLA will emit INSERT INTO $table ( ) VALUES ( )
+ # which is sadly understood only by MySQL. Change default behavior here,
+ # until SQLA2 comes with proper dialect support
+ if (! $_[0] or (ref $_[0] eq 'HASH' and !keys %{$_[0]} ) ) {
+ return "INSERT INTO ${table} DEFAULT VALUES"
+ }
+
$self->SUPER::insert($table, @_);
}
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/Cursor.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/Cursor.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/Cursor.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -49,32 +49,6 @@
return bless ($new, $class);
}
-=head2 as_query
-
-=over 4
-
-=item Arguments: none
-
-=item Return Value: \[ $sql, @bind ]
-
-=back
-
-Returns the SQL statement and bind vars associated with the invocant.
-
-=cut
-
-sub as_query {
- my $self = shift;
-
- my $storage = $self->{storage};
- my $sql_maker = $storage->sql_maker;
- local $sql_maker->{for};
-
- my @args = $storage->_select_args(@{$self->{args}});
- my ($sql, $bind) = $storage->_prep_for_execute(@args[0 .. 2], [@args[4 .. $#args]]);
- return \[ "($sql)", @{ $bind || [] }];
-}
-
=head2 next
=over 4
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/SQLite.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/SQLite.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/SQLite.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -45,6 +45,8 @@
return $backupfile;
}
+sub datetime_parser_type { return "DateTime::Format::SQLite"; }
+
1;
=head1 NAME
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/mysql.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/mysql.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI/mysql.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -5,7 +5,7 @@
use base qw/DBIx::Class::Storage::DBI::MultiColumnIn/;
-# __PACKAGE__->load_components(qw/PK::Auto/);
+__PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks::MySQL');
sub with_deferred_fk_checks {
my ($self, $sub) = @_;
@@ -53,10 +53,23 @@
# MySql can not do subquery update/deletes, only way is slow per-row operations.
# This assumes you have set proper transaction isolation and use innodb.
-sub subq_update_delete {
+sub _subq_update_delete {
return shift->_per_row_update_delete (@_);
}
+# MySql chokes on things like:
+# COUNT(*) FROM (SELECT tab1.col, tab2.col FROM tab1 JOIN tab2 ... )
+# claiming that col is a duplicate column (it loses the table specifiers by
+# the time it gets to the *). Thus for any subquery count we select only the
+# primary keys of the main table in the inner query. This hopefully still
+# hits the indexes and keeps mysql happy.
+# (mysql does not care if the SELECT and the GROUP BY match)
+sub _grouped_count_select {
+ my ($self, $source, $rs_args) = @_;
+ my @pcols = map { join '.', $rs_args->{alias}, $_ } ($source->primary_columns);
+ return @pcols ? \@pcols : $rs_args->{group_by};
+}
+
1;
=head1 NAME
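
The count scenario the MySQL-specific _grouped_count_select above addresses, as exercised in the updated t/71mysql.t (assumes $schema is connected to MySQL):

  # The inner count subquery now selects only me.id instead of every column
  # of both joined tables, so MySQL no longer rejects it with
  # "Duplicate column name 'id'"
  my $count = $schema->resultset('BooksInLibrary')->search (
    { 'owner.name' => 'wiggle' },
    { prefetch => 'owner', distinct => 1 },
  )->count;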
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class/Storage/DBI.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -7,10 +7,9 @@
use warnings;
use Carp::Clan qw/^DBIx::Class/;
use DBI;
-use DBIx::Class::SQLAHacks;
use DBIx::Class::Storage::DBI::Cursor;
use DBIx::Class::Storage::Statistics;
-use Scalar::Util qw/blessed weaken/;
+use Scalar::Util();
use List::Util();
__PACKAGE__->mk_group_accessors('simple' =>
@@ -603,6 +602,7 @@
my ($self) = @_;
unless ($self->_sql_maker) {
my $sql_maker_class = $self->sql_maker_class;
+ $self->ensure_class_loaded ($sql_maker_class);
$self->_sql_maker($sql_maker_class->new( $self->_sql_maker_args ));
}
return $self->_sql_maker;
@@ -717,7 +717,7 @@
if($dbh && !$self->unsafe) {
my $weak_self = $self;
- weaken($weak_self);
+ Scalar::Util::weaken($weak_self);
$dbh->{HandleError} = sub {
if ($weak_self) {
$weak_self->throw_exception("DBI Exception: $_[0]");
@@ -898,7 +898,7 @@
sub _prep_for_execute {
my ($self, $op, $extra_bind, $ident, $args) = @_;
- if( blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
+ if( Scalar::Util::blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
$ident = $ident->from();
}
@@ -910,6 +910,38 @@
return ($sql, \@bind);
}
+=head2 as_query
+
+=over 4
+
+=item Arguments: $rs_attrs
+
+=item Return Value: \[ $sql, @bind ]
+
+=back
+
+Returns the SQL statement and bind vars that would result from the given
+ResultSet attributes (does not actually run a query)
+
+=cut
+
+sub as_query {
+ my ($self, $rs_attr) = @_;
+
+ my $sql_maker = $self->sql_maker;
+ local $sql_maker->{for};
+
+ # my ($op, $bind, $ident, $bind_attrs, $select, $cond, $order, $rows, $offset) = $self->_select_args(...);
+ my @args = $self->_select_args($rs_attr->{from}, $rs_attr->{select}, $rs_attr->{where}, $rs_attr);
+
+ # my ($sql, $bind) = $self->_prep_for_execute($op, $bind, $ident, [ $select, $cond, $order, $rows, $offset ]);
+ my ($sql, $bind) = $self->_prep_for_execute(
+ @args[0 .. 2],
+ [ @args[4 .. $#args] ],
+ );
+ return \[ "($sql)", @{ $bind || [] }];
+}
+
sub _fix_bind_params {
my ($self, @bind) = @_;
@@ -931,7 +963,7 @@
if ( $self->debug ) {
@bind = $self->_fix_bind_params(@bind);
-
+
$self->debugobj->query_start( $sql, @bind );
}
}
@@ -990,8 +1022,8 @@
sub insert {
my ($self, $source, $to_insert) = @_;
-
- my $ident = $source->from;
+
+ my $ident = $source->from;
my $bind_attributes = $self->source_bind_attributes($source);
my $updated_cols = {};
@@ -1092,7 +1124,7 @@
my $self = shift @_;
my $source = shift @_;
- my $bind_attrs = {}; ## If ever it's needed...
+ my $bind_attrs = $self->source_bind_attributes($source);
return $self->_execute('delete' => [], $source, $bind_attrs, @_);
}
@@ -1104,7 +1136,7 @@
# Genarating a single PK column subquery is trivial and supported
# by all RDBMS. However if we have a multicolumn PK, things get ugly.
# Look at _multipk_update_delete()
-sub subq_update_delete {
+sub _subq_update_delete {
my $self = shift;
my ($rs, $op, $values) = @_;
@@ -1197,23 +1229,41 @@
sub _select_args {
my ($self, $ident, $select, $condition, $attrs) = @_;
- my $order = $attrs->{order_by};
my $for = delete $attrs->{for};
my $sql_maker = $self->sql_maker;
$sql_maker->{for} = $for;
- my @in_order_attrs = qw/group_by having _virtual_order_by/;
- if (List::Util::first { exists $attrs->{$_} } (@in_order_attrs) ) {
- $order = {
- ($order
- ? (order_by => $order)
- : ()
- ),
- ( map { $_ => $attrs->{$_} } (@in_order_attrs) )
- };
+ my $order = { map
+ { $attrs->{$_} ? ( $_ => $attrs->{$_} ) : () }
+ (qw/order_by group_by having _virtual_order_by/ )
+ };
+
+
+ my $bind_attrs = {};
+
+ my $alias2source = $self->_resolve_ident_sources ($ident);
+
+ for my $alias (keys %$alias2source) {
+ my $bindtypes = $self->source_bind_attributes ($alias2source->{$alias}) || {};
+ for my $col (keys %$bindtypes) {
+
+ my $fqcn = join ('.', $alias, $col);
+ $bind_attrs->{$fqcn} = $bindtypes->{$col} if $bindtypes->{$col};
+
+ # so that unqualified searches can be bound too
+ $bind_attrs->{$col} = $bind_attrs->{$fqcn} if $alias eq 'me';
+ }
}
- my $bind_attrs = {}; ## Future support
+
+ # This would be the point to deflate anything found in $condition
+ # (and leave $attrs->{bind} intact). Problem is - inflators historically
+ # expect a row object. And all we have is a resultsource (it is trivial
+ # to extract deflator coderefs via $alias2source above).
+ #
+ # I don't see a way forward other than changing the way deflators are
+ # invoked, and that's just bad...
+
my @args = ('select', $attrs->{bind}, $ident, $bind_attrs, $select, $condition, $order);
if ($attrs->{software_limit} ||
$sql_maker->_default_limit_syntax eq "GenericSubQ") {
@@ -1229,6 +1279,36 @@
return @args;
}
+sub _resolve_ident_sources {
+ my ($self, $ident) = @_;
+
+ my $alias2source = {};
+
+ # the reason this is so contrived is that $ident may be a {from}
+ # structure, specifying multiple tables to join
+ if ( Scalar::Util::blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
+ # this is compat mode for insert/update/delete which do not deal with aliases
+ $alias2source->{me} = $ident;
+ }
+ elsif (ref $ident eq 'ARRAY') {
+
+ for (@$ident) {
+ my $tabinfo;
+ if (ref $_ eq 'HASH') {
+ $tabinfo = $_;
+ }
+ if (ref $_ eq 'ARRAY' and ref $_->[0] eq 'HASH') {
+ $tabinfo = $_->[0];
+ }
+
+ $alias2source->{$tabinfo->{-alias}} = $tabinfo->{-result_source}
+ if ($tabinfo->{-result_source});
+ }
+ }
+
+ return $alias2source;
+}
+
sub count {
my ($self, $source, $attrs) = @_;
@@ -1269,7 +1349,7 @@
}
$sub_attrs->{group_by} ||= [ map { "$attrs->{alias}.$_" } ($source->primary_columns) ];
- $sub_attrs->{select} = $self->_grouped_count_select ($sub_attrs);
+ $sub_attrs->{select} = $self->_grouped_count_select ($source, $sub_attrs);
$attrs->{from} = [{
count_subq => $source->resultset_class->new ($source, $sub_attrs )->as_query
@@ -1288,8 +1368,8 @@
# choke in various ways.
#
sub _grouped_count_select {
- my ($self, $attrs) = @_;
- return $attrs->{group_by};
+ my ($self, $source, $rs_args) = @_;
+ return $rs_args->{group_by};
}
sub source_bind_attributes {
@@ -1479,6 +1559,27 @@
return;
}
+=head2 is_datatype_numeric
+
+Given a datatype from column_info, returns a boolean value indicating if
+the current RDBMS considers it a numeric value. This controls how
+L<DBIx::Class::Row/set_column> decides whether to mark the column as
+dirty - when the datatype is deemed numeric a C<< != >> comparison will
+be performed instead of the usual C<eq>.
+
+=cut
+
+sub is_datatype_numeric {
+ my ($self, $dt) = @_;
+
+ return 0 unless $dt;
+
+ return $dt =~ /^ (?:
+ numeric | int(?:eger)? | (?:tiny|small|medium|big)int | dec(?:imal)? | real | float | double (?: \s+ precision)? | (?:big)?serial
+ ) $/ix;
+}
+
+
=head2 create_ddl_dir (EXPERIMENTAL)
=over 4
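
For reference, a few illustrative calls against the new is_datatype_numeric() heuristic shown above ($storage being any connected DBI storage):

  $storage->is_datatype_numeric ('integer');            # true
  $storage->is_datatype_numeric ('double precision');   # true
  $storage->is_datatype_numeric ('varchar');            # false
  $storage->is_datatype_numeric (undef);                # 0 - unknown/missing types are not numeric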
Modified: DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/lib/DBIx/Class.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -24,7 +24,7 @@
# i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports
# brain damage and presumably various other packaging systems too
-$VERSION = '0.08103';
+$VERSION = '0.08104';
$VERSION = eval $VERSION; # numify for warning-free dev releases
Modified: DBIx-Class/0.08/branches/mc_fixes/t/03podcoverage.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/03podcoverage.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/03podcoverage.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -116,6 +116,7 @@
'DBIx::Class::Storage::DBI::Pg' => { skip => 1 },
'DBIx::Class::Storage::DBI::SQLite' => { skip => 1 },
'DBIx::Class::Storage::DBI::mysql' => { skip => 1 },
+ 'DBIx::Class::SQLAHacks::MySQL' => { skip => 1 },
'SQL::Translator::Parser::DBIx::Class' => { skip => 1 },
'SQL::Translator::Producer::DBIx::Class::File' => { skip => 1 },
Added: DBIx-Class/0.08/branches/mc_fixes/t/18insert_default.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/18insert_default.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/18insert_default.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,31 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $tests = 3;
+plan tests => $tests;
+
+my $schema = DBICTest->init_schema();
+my $rs = $schema->resultset ('Artist');
+my $last_obj = $rs->search ({}, { order_by => { -desc => 'artistid' }, rows => 1})->single;
+my $last_id = $last_obj ? $last_obj->artistid : 0;
+
+my $obj;
+eval { $obj = $rs->create ({}) };
+my $err = $@;
+
+ok ($obj, 'Insert defaults ( $rs->create ({}) )' );
+SKIP: {
+ skip "Default insert failed: $err", $tests-1 if $err;
+
+ # this should be picked up without calling the DB again
+ is ($obj->artistid, $last_id + 1, 'Autoinc PK works');
+
+ # for this we need to refresh
+ $obj->discard_changes;
+ is ($obj->rank, 13, 'Default value works');
+}
+
Modified: DBIx-Class/0.08/branches/mc_fixes/t/36datetime.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/36datetime.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/36datetime.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -5,9 +5,8 @@
use lib qw(t/lib);
use DBICTest;
-eval { require DateTime::Format::MySQL };
-
-plan $@ ? ( skip_all => 'Requires DateTime::Format::MySQL' )
+eval { require DateTime::Format::SQLite };
+plan $@ ? ( skip_all => 'Requires DateTime::Format::SQLite' )
: ( tests => 3 );
my $schema = DBICTest->init_schema(
@@ -24,8 +23,6 @@
my $parser = $schema->storage->datetime_parser();
-# We're currently expecting a MySQL parser. May change in future.
-is($parser, 'DateTime::Format::MySQL', 'Got expected datetime_parser');
-
+is($parser, 'DateTime::Format::SQLite', 'Got expected storage-set datetime_parser');
isa_ok($schema->storage, 'DBIx::Class::Storage::DBI::SQLite', 'storage');
Modified: DBIx-Class/0.08/branches/mc_fixes/t/52cycle.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/52cycle.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/52cycle.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -5,9 +5,9 @@
use lib qw(t/lib);
BEGIN {
- eval { require Test::Memory::Cycle };
- if ($@) {
- plan skip_all => "leak test needs Test::Memory::Cycle";
+ eval { require Test::Memory::Cycle; require Devel::Cycle };
+ if ($@ or Devel::Cycle->VERSION < 1.10) {
+ plan skip_all => "leak test needs Test::Memory::Cycle and Devel::Cycle >= 1.10";
} else {
plan tests => 1;
}
Deleted: DBIx-Class/0.08/branches/mc_fixes/t/68inflate.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/68inflate.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/68inflate.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,112 +0,0 @@
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-my $schema = DBICTest->init_schema();
-
-eval { require DateTime };
-plan skip_all => "Need DateTime for inflation tests" if $@;
-
-plan tests => 21;
-
-$schema->class('CD')
-#DBICTest::Schema::CD
-->inflate_column( 'year',
- { inflate => sub { DateTime->new( year => shift ) },
- deflate => sub { shift->year } }
-);
-Class::C3->reinitialize;
-
-# inflation test
-my $cd = $schema->resultset("CD")->find(3);
-
-is( ref($cd->year), 'DateTime', 'year is a DateTime, ok' );
-
-is( $cd->year->year, 1997, 'inflated year ok' );
-
-is( $cd->year->month, 1, 'inflated month ok' );
-
-eval { $cd->year(\'year +1'); };
-ok(!$@, 'updated year using a scalarref');
-$cd->update();
-$cd->discard_changes();
-
-is( ref($cd->year), 'DateTime', 'year is still a DateTime, ok' );
-
-is( $cd->year->year, 1998, 'updated year, bypassing inflation' );
-
-is( $cd->year->month, 1, 'month is still 1' );
-
-# get_inflated_column test
-
-is( ref($cd->get_inflated_column('year')), 'DateTime', 'get_inflated_column produces a DateTime');
-
-# deflate test
-my $now = DateTime->now;
-$cd->year( $now );
-$cd->update;
-
-$cd = $schema->resultset("CD")->find(3);
-is( $cd->year->year, $now->year, 'deflate ok' );
-
-# set_inflated_column test
-eval { $cd->set_inflated_column('year', $now) };
-ok(!$@, 'set_inflated_column with DateTime object');
-$cd->update;
-
-$cd = $schema->resultset("CD")->find(3);
-is( $cd->year->year, $now->year, 'deflate ok' );
-
-$cd = $schema->resultset("CD")->find(3);
-my $before_year = $cd->year->year;
-eval { $cd->set_inflated_column('year', \'year + 1') };
-ok(!$@, 'set_inflated_column to "year + 1"');
-$cd->update;
-
-$cd = $schema->resultset("CD")->find(3);
-is( $cd->year->year, $before_year+1, 'deflate ok' );
-
-# store_inflated_column test
-$cd = $schema->resultset("CD")->find(3);
-eval { $cd->store_inflated_column('year', $now) };
-ok(!$@, 'store_inflated_column with DateTime object');
-$cd->update;
-
-is( $cd->year->year, $now->year, 'deflate ok' );
-
-# update tests
-$cd = $schema->resultset("CD")->find(3);
-eval { $cd->update({'year' => $now}) };
-ok(!$@, 'update using DateTime object ok');
-is($cd->year->year, $now->year, 'deflate ok');
-
-$cd = $schema->resultset("CD")->find(3);
-$before_year = $cd->year->year;
-eval { $cd->update({'year' => \'year + 1'}) };
-ok(!$@, 'update using scalarref ok');
-
-$cd = $schema->resultset("CD")->find(3);
-is($cd->year->year, $before_year + 1, 'deflate ok');
-
-# discard_changes test
-$cd = $schema->resultset("CD")->find(3);
-# inflate the year
-$before_year = $cd->year->year;
-$cd->update({ year => \'year + 1'});
-$cd->discard_changes;
-
-is($cd->year->year, $before_year + 1, 'discard_changes clears the inflated value');
-
-my $copy = $cd->copy({ year => $now, title => "zemoose" });
-
-isnt( $copy->year->year, $before_year, "copy" );
-
-# eval { $cd->store_inflated_column('year', \'year + 1') };
-# print STDERR "ERROR: $@" if($@);
-# ok(!$@, 'store_inflated_column to "year + 1"');
-
-# is_deeply( $cd->year, \'year + 1', 'deflate ok' );
-
Deleted: DBIx-Class/0.08/branches/mc_fixes/t/68inflate_resultclass_hashrefinflator.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/68inflate_resultclass_hashrefinflator.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/68inflate_resultclass_hashrefinflator.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,126 +0,0 @@
-use strict;
-use warnings;
-
-use Test::More qw(no_plan);
-use lib qw(t/lib);
-use DBICTest;
-my $schema = DBICTest->init_schema();
-
-
-# Under some versions of SQLite if the $rs is left hanging around it will lock
-# So we create a scope here cos I'm lazy
-{
- my $rs = $schema->resultset('CD');
-
- # get the defined columns
- my @dbic_cols = sort $rs->result_source->columns;
-
- # use the hashref inflator class as result class
- $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
-
- # fetch first record
- my $datahashref1 = $rs->first;
-
- my @hashref_cols = sort keys %$datahashref1;
-
- is_deeply( \@dbic_cols, \@hashref_cols, 'returned columns' );
-}
-
-
-sub check_cols_of {
- my ($dbic_obj, $datahashref) = @_;
-
- foreach my $col (keys %$datahashref) {
- # plain column
- if (not ref ($datahashref->{$col}) ) {
- is ($datahashref->{$col}, $dbic_obj->get_column($col), 'same value');
- }
- # related table entry (belongs_to)
- elsif (ref ($datahashref->{$col}) eq 'HASH') {
- check_cols_of($dbic_obj->$col, $datahashref->{$col});
- }
- # multiple related entries (has_many)
- elsif (ref ($datahashref->{$col}) eq 'ARRAY') {
- my @dbic_reltable = $dbic_obj->$col;
- my @hashref_reltable = @{$datahashref->{$col}};
-
- is (scalar @hashref_reltable, scalar @dbic_reltable, 'number of related entries');
-
- # for my $index (0..scalar @hashref_reltable) {
- for my $index (0..scalar @dbic_reltable) {
- my $dbic_reltable_obj = $dbic_reltable[$index];
- my $hashref_reltable_entry = $hashref_reltable[$index];
-
- check_cols_of($dbic_reltable_obj, $hashref_reltable_entry);
- }
- }
- }
-}
-
-# create a cd without tracks for testing empty has_many relationship
-$schema->resultset('CD')->create({ title => 'Silence is golden', artist => 3, year => 2006 });
-
-# order_by to ensure both resultsets have the rows in the same order
-# also check result_class-as-an-attribute syntax
-my $rs_dbic = $schema->resultset('CD')->search(undef,
- {
- prefetch => [ qw/ artist tracks / ],
- order_by => [ 'me.cdid', 'tracks.position' ],
- }
-);
-my $rs_hashrefinf = $schema->resultset('CD')->search(undef,
- {
- prefetch => [ qw/ artist tracks / ],
- order_by => [ 'me.cdid', 'tracks.position' ],
- result_class => 'DBIx::Class::ResultClass::HashRefInflator',
- }
-);
-
-my @dbic = $rs_dbic->all;
-my @hashrefinf = $rs_hashrefinf->all;
-
-for my $index (0 .. $#hashrefinf) {
- my $dbic_obj = $dbic[$index];
- my $datahashref = $hashrefinf[$index];
-
- check_cols_of($dbic_obj, $datahashref);
-}
-
-# sometimes for ultra-mega-speed you want to fetch columns in esoteric ways
-# check the inflator over a non-fetching join
-$rs_dbic = $schema->resultset ('Artist')->search ({ 'me.artistid' => 1}, {
- prefetch => { cds => 'tracks' },
- order_by => [qw/cds.cdid tracks.trackid/],
-});
-
-$rs_hashrefinf = $schema->resultset ('Artist')->search ({ 'me.artistid' => 1}, {
- join => { cds => 'tracks' },
- select => [qw/name tracks.title tracks.cd /],
- as => [qw/name cds.tracks.title cds.tracks.cd /],
- order_by => [qw/cds.cdid tracks.trackid/],
- result_class => 'DBIx::Class::ResultClass::HashRefInflator',
-});
-
-@dbic = map { $_->tracks->all } ($rs_dbic->first->cds->all);
-@hashrefinf = $rs_hashrefinf->all;
-
-is (scalar @dbic, scalar @hashrefinf, 'Equal number of tracks fetched');
-
-for my $index (0 .. $#hashrefinf) {
- my $track = $dbic[$index];
- my $datahashref = $hashrefinf[$index];
-
- is ($track->cd->artist->name, $datahashref->{name}, 'Brought back correct artist');
- for my $col (keys %{$datahashref->{cds}{tracks}}) {
- is ($track->get_column ($col), $datahashref->{cds}{tracks}{$col}, "Correct track '$col'");
- }
-}
-
-# check for same query as above but using extended columns syntax
-$rs_hashrefinf = $schema->resultset ('Artist')->search ({ 'me.artistid' => 1}, {
- join => { cds => 'tracks' },
- columns => {name => 'name', 'cds.tracks.title' => 'tracks.title', 'cds.tracks.cd' => 'tracks.cd'},
- order_by => [qw/cds.cdid tracks.trackid/],
-});
-$rs_hashrefinf->result_class('DBIx::Class::ResultClass::HashRefInflator');
-is_deeply [$rs_hashrefinf->all], \@hashrefinf, 'Check query using extended columns syntax';
Deleted: DBIx-Class/0.08/branches/mc_fixes/t/68inflate_serialize.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/68inflate_serialize.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/68inflate_serialize.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,86 +0,0 @@
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-my $schema = DBICTest->init_schema();
-
-use Data::Dumper;
-
-my @serializers = (
- { module => 'YAML.pm',
- inflater => sub { YAML::Load (shift) },
- deflater => sub { die "Expecting a reference" unless (ref $_[0]); YAML::Dump (shift) },
- },
- { module => 'Storable.pm',
- inflater => sub { Storable::thaw (shift) },
- deflater => sub { die "Expecting a reference" unless (ref $_[0]); Storable::nfreeze (shift) },
- },
-);
-
-
-my $selected;
-foreach my $serializer (@serializers) {
- eval { require $serializer->{module} };
- unless ($@) {
- $selected = $serializer;
- last;
- }
-}
-
-plan (skip_all => "No suitable serializer found") unless $selected;
-
-plan (tests => 8);
-DBICTest::Schema::Serialized->inflate_column( 'serialized',
- { inflate => $selected->{inflater},
- deflate => $selected->{deflater},
- },
-);
-Class::C3->reinitialize;
-
-my $struct_hash = {
- a => 1,
- b => [
- { c => 2 },
- ],
- d => 3,
-};
-
-my $struct_array = [
- 'a',
- {
- b => 1,
- c => 2
- },
- 'd',
-];
-
-my $rs = $schema->resultset('Serialized');
-my $inflated;
-
-#======= testing hashref serialization
-
-my $object = $rs->create( {
- id => 1,
- serialized => '',
-} );
-ok($object->update( { serialized => $struct_hash } ), 'hashref deflation');
-ok($inflated = $object->serialized, 'hashref inflation');
-is_deeply($inflated, $struct_hash, 'inflated hash matches original');
-
-$object = $rs->create( {
- id => 2,
- serialized => '',
-} );
-eval { $object->set_inflated_column('serialized', $struct_hash) };
-ok(!$@, 'set_inflated_column to a hashref');
-is_deeply($object->serialized, $struct_hash, 'inflated hash matches original');
-
-
-#====== testing arrayref serialization
-
-ok($object->update( { serialized => $struct_array } ), 'arrayref deflation');
-ok($inflated = $object->serialized, 'arrayref inflation');
-is_deeply($inflated, $struct_array, 'inflated array matches original');
Modified: DBIx-Class/0.08/branches/mc_fixes/t/71mysql.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/71mysql.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/71mysql.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -14,7 +14,7 @@
plan skip_all => 'Set $ENV{DBICTEST_MYSQL_DSN}, _USER and _PASS to run this test'
unless ($dsn && $user);
-plan tests => 23;
+plan tests => 19;
my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
@@ -114,7 +114,7 @@
# (mysql doesn't seem to like subqueries with equally named columns)
#
-SKIP: {
+{
# try a ->has_many direction (due to a 'multi' accessor the select/group_by group is collapsed)
my $owners = $schema->resultset ('Owners')->search (
{ 'books.id' => { '!=', undef }},
@@ -122,41 +122,19 @@
);
my $owners2 = $schema->resultset ('Owners')->search ({ id => { -in => $owners->get_column ('me.id')->as_query }});
for ($owners, $owners2) {
- lives_ok { is ($_->all, 2, 'Prefetched grouped search returns correct number of rows') }
- || skip ('No test due to exception', 1);
- lives_ok { is ($_->count, 2, 'Prefetched grouped search returns correct count') }
- || skip ('No test due to exception', 1);
+ is ($_->all, 2, 'Prefetched grouped search returns correct number of rows');
+ is ($_->count, 2, 'Prefetched grouped search returns correct count');
}
- TODO: {
- # try a ->prefetch direction (no select collapse)
- my $books = $schema->resultset ('BooksInLibrary')->search (
- { 'owner.name' => 'wiggle' },
- { prefetch => 'owner', distinct => 1 }
- );
-
- local $TODO = 'MySQL is crazy - there seems to be no way to make this work';
- # error thrown is:
- # Duplicate column name 'id' [for Statement "
- # SELECT COUNT( * )
- # FROM (
- # SELECT me.id, me.source, me.owner, me.title, me.price, owner.id, owner.name
- # FROM books me
- # JOIN owners owner ON owner.id = me.owner
- # WHERE ( ( owner.name = ? AND source = ? ) )
- # GROUP BY me.id, me.source, me.owner, me.title, me.price, owner.id, owner.name
- # ) count_subq
- # " with ParamValues: 0='wiggle', 1='Library']
- #
- # go fucking figure
-
- my $books2 = $schema->resultset ('BooksInLibrary')->search ({ id => { -in => $books->get_column ('me.id')->as_query }});
- for ($books, $books2) {
- lives_ok { is ($_->all, 1, 'Prefetched grouped search returns correct number of rows') }
- || skip ('No test due to exception', 1);
- lives_ok { is ($_->count, 1, 'Prefetched grouped search returns correct count') }
- || skip ('No test due to exception', 1);
- }
+ # try a ->belongs_to direction (no select collapse)
+ my $books = $schema->resultset ('BooksInLibrary')->search (
+ { 'owner.name' => 'wiggle' },
+ { prefetch => 'owner', distinct => 1 }
+ );
+ my $books2 = $schema->resultset ('BooksInLibrary')->search ({ id => { -in => $books->get_column ('me.id')->as_query }});
+ for ($books, $books2) {
+ is ($_->all, 1, 'Prefetched grouped search returns correct number of rows');
+ is ($_->count, 1, 'Prefetched grouped search returns correct count');
}
}
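A note on the MySQL limitation referenced in the hunk above: MySQL refuses a derived table (a subquery in FROM) whose select list contains two columns with the same name, which is exactly what the collapsed prefetch count produced ("Duplicate column name 'id'" in the removed comment). The following standalone sketch is illustrative only and is not part of this commit; it assumes the same DBICTEST_MYSQL_* environment variables used by 71mysql.t and the books/owners tables shown in the removed comment.

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect(
        @ENV{qw/DBICTEST_MYSQL_DSN DBICTEST_MYSQL_USER DBICTEST_MYSQL_PASS/},
        { RaiseError => 1 },
    );

    # Both books.id and owners.id surface unaliased inside the derived table,
    # so MySQL aborts with "Duplicate column name 'id'"
    my $dup_cols = q{
        SELECT COUNT(*) FROM (
            SELECT me.id, me.title, owner.id, owner.name
              FROM books me JOIN owners owner ON owner.id = me.owner
        ) count_subq
    };
    eval { $dbh->selectrow_array($dup_cols) };
    warn "as expected: $@" if $@;

    # Aliasing one of the clashing columns (or selecting only what the count
    # actually needs) keeps MySQL happy
    my $aliased = q{
        SELECT COUNT(*) FROM (
            SELECT me.id, me.title, owner.id AS owner_id, owner.name
              FROM books me JOIN owners owner ON owner.id = me.owner
        ) count_subq
    };
    my ($count) = $dbh->selectrow_array($aliased);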
Modified: DBIx-Class/0.08/branches/mc_fixes/t/86sqlt.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/86sqlt.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/86sqlt.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -210,7 +210,7 @@
'name' => 'bookmark_fk_link', 'index_name' => 'bookmark_idx_link',
'selftable' => 'bookmark', 'foreigntable' => 'link',
'selfcols' => ['link'], 'foreigncols' => ['id'],
- on_delete => '', on_update => '', deferrable => 1,
+ on_delete => 'SET NULL', on_update => 'CASCADE', deferrable => 1,
},
],
# ForceForeign
Deleted: DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,162 +0,0 @@
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-{
- local $SIG{__WARN__} = sub { warn @_ if $_[0] !~ /extra \=\> .+? has been deprecated/ };
- DBICTest::Schema->load_classes('EventTZDeprecated');
- DBICTest::Schema->load_classes('EventTZPg');
-}
-
-my $schema = DBICTest->init_schema();
-
-plan tests => 53;
-
-SKIP: {
- eval { require DateTime::Format::MySQL };
- skip "Need DateTime::Format::MySQL for inflation tests", 50 if $@;
-
-
-# inflation test
-my $event = $schema->resultset("Event")->find(1);
-
-isa_ok($event->starts_at, 'DateTime', 'DateTime returned');
-
-# klunky, but makes older Test::More installs happy
-my $starts = $event->starts_at;
-is("$starts", '2006-04-25T22:24:33', 'Correct date/time');
-
-# create using DateTime
-my $created = $schema->resultset('Event')->create({
- starts_at => DateTime->new(year=>2006, month=>6, day=>18),
- created_on => DateTime->new(year=>2006, month=>6, day=>23)
-});
-my $created_start = $created->starts_at;
-
-isa_ok($created->starts_at, 'DateTime', 'DateTime returned');
-is("$created_start", '2006-06-18T00:00:00', 'Correct date/time');
-
-## timestamp field
-isa_ok($event->created_on, 'DateTime', 'DateTime returned');
-
-## varchar fields
-isa_ok($event->varchar_date, 'DateTime', 'DateTime returned');
-isa_ok($event->varchar_datetime, 'DateTime', 'DateTime returned');
-
-## skip inflation field
-isnt(ref($event->skip_inflation), 'DateTime', 'No DateTime returned for skip inflation column');
-
-# klunky, but makes older Test::More installs happy
-my $createo = $event->created_on;
-is("$createo", '2006-06-22T21:00:05', 'Correct date/time');
-
-my $created_cron = $created->created_on;
-
-isa_ok($created->created_on, 'DateTime', 'DateTime returned');
-is("$created_cron", '2006-06-23T00:00:00', 'Correct date/time');
-
-
-# Test "timezone" parameter
-
-foreach my $tbl (qw/EventTZ EventTZDeprecated/) {
- my $event_tz = $schema->resultset($tbl)->create({
- starts_at => DateTime->new(year=>2007, month=>12, day=>31, time_zone => "America/Chicago" ),
- created_on => DateTime->new(year=>2006, month=>1, day=>31,
- hour => 13, minute => 34, second => 56, time_zone => "America/New_York" ),
- });
-
- is ($event_tz->starts_at->day_name, "Montag", 'Locale de_DE loaded: day_name');
- is ($event_tz->starts_at->month_name, "Dezember", 'Locale de_DE loaded: month_name');
- is ($event_tz->created_on->day_name, "Tuesday", 'Default locale loaded: day_name');
- is ($event_tz->created_on->month_name, "January", 'Default locale loaded: month_name');
-
- my $starts_at = $event_tz->starts_at;
- is("$starts_at", '2007-12-31T00:00:00', 'Correct date/time using timezone');
-
- my $created_on = $event_tz->created_on;
- is("$created_on", '2006-01-31T12:34:56', 'Correct timestamp using timezone');
- is($event_tz->created_on->time_zone->name, "America/Chicago", "Correct timezone");
-
- my $loaded_event = $schema->resultset($tbl)->find( $event_tz->id );
-
- isa_ok($loaded_event->starts_at, 'DateTime', 'DateTime returned');
- $starts_at = $loaded_event->starts_at;
- is("$starts_at", '2007-12-31T00:00:00', 'Loaded correct date/time using timezone');
- is($starts_at->time_zone->name, 'America/Chicago', 'Correct timezone');
-
- isa_ok($loaded_event->created_on, 'DateTime', 'DateTime returned');
- $created_on = $loaded_event->created_on;
- is("$created_on", '2006-01-31T12:34:56', 'Loaded correct timestamp using timezone');
- is($created_on->time_zone->name, 'America/Chicago', 'Correct timezone');
-
- # Test floating timezone warning
- # We expect one warning
- SKIP: {
- skip "ENV{DBIC_FLOATING_TZ_OK} was set, skipping", 1 if $ENV{DBIC_FLOATING_TZ_OK};
- local $SIG{__WARN__} = sub {
- like(
- shift,
- qr/You're using a floating timezone, please see the documentation of DBIx::Class::InflateColumn::DateTime for an explanation/,
- 'Floating timezone warning'
- );
- };
- my $event_tz_floating = $schema->resultset($tbl)->create({
- starts_at => DateTime->new(year=>2007, month=>12, day=>31, ),
- created_on => DateTime->new(year=>2006, month=>1, day=>31,
- hour => 13, minute => 34, second => 56, ),
- });
- delete $SIG{__WARN__};
- };
-
- # This should fail to set
- my $prev_str = "$created_on";
- $loaded_event->update({ created_on => '0000-00-00' });
- is("$created_on", $prev_str, "Don't update invalid dates");
-
- my $invalid = $schema->resultset('Event')->create({
- starts_at => '0000-00-00',
- created_on => $created_on
- });
-
- is( $invalid->get_column('starts_at'), '0000-00-00', "Invalid date stored" );
- is( $invalid->starts_at, undef, "Inflate to undef" );
-
- $invalid->created_on('0000-00-00');
- $invalid->update;
-
- {
- local $@;
- eval { $invalid->created_on };
- like( $@, qr/invalid date format/i, "Invalid date format exception");
- }
-}
-
-## varchar field using inflate_date => 1
-my $varchar_date = $event->varchar_date;
-is("$varchar_date", '2006-07-23T00:00:00', 'Correct date/time');
-
-## varchar field using inflate_datetime => 1
-my $varchar_datetime = $event->varchar_datetime;
-is("$varchar_datetime", '2006-05-22T19:05:07', 'Correct date/time');
-
-## skip inflation field
-my $skip_inflation = $event->skip_inflation;
-is ("$skip_inflation", '2006-04-21 18:04:06', 'Correct date/time');
-
-} # Skip if no MySQL DT::Formatter
-
-SKIP: {
- eval { require DateTime::Format::Pg };
- skip ('Need DateTime::Format::Pg for timestamp inflation tests', 3) if $@;
-
- my $event = $schema->resultset("EventTZPg")->find(1);
- $event->update({created_on => '2009-01-15 17:00:00+00'});
- $event->discard_changes;
- isa_ok($event->created_on, "DateTime") or diag $event->created_on;
- is($event->created_on->time_zone->name, "America/Chicago", "Timezone changed");
- # Time zone difference -> -6hours
- is($event->created_on->iso8601, "2009-01-15T11:00:00", "Time with TZ correct");
-}
Deleted: DBIx-Class/0.08/branches/mc_fixes/t/96file_column.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/96file_column.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/96file_column.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,85 +0,0 @@
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-use IO::File;
-use File::Compare;
-use Path::Class qw/file/;
-
-my $schema = DBICTest->init_schema();
-
-plan tests => 10;
-
-my $rs = $schema->resultset('FileColumn');
-my $fname = '96file_column.t';
-my $source_file = file('t', $fname);
-my $fh = $source_file->open('r') or die "failed to open $source_file: $!\n";
-my $fc = eval {
- $rs->create({ file => { handle => $fh, filename => $fname } })
-};
-is ( $@, '', 'created' );
-
-$fh->close;
-
-my $storage = file(
- $fc->column_info('file')->{file_column_path},
- $fc->id,
- $fc->file->{filename},
-);
-ok ( -e $storage, 'storage exists' );
-
-# read it back
-$fc = $rs->find({ id => $fc->id });
-
-is ( $fc->file->{filename}, $fname, 'filename matches' );
-ok ( compare($storage, $source_file) == 0, 'file contents matches' );
-
-# update
-my $new_fname = 'File.pm';
-my $new_source_file = file(qw/lib DBIx Class InflateColumn File.pm/);
-my $new_storage = file(
- $fc->column_info('file')->{file_column_path},
- $fc->id,
- $new_fname,
-);
-$fh = $new_source_file->open('r') or die "failed to open $new_source_file: $!\n";
-
-$fc->file({ handle => $fh, filename => $new_fname });
-$fc->update;
-
-TODO: {
- local $TODO = 'design change required';
- ok ( ! -e $storage, 'old storage does not exist' );
-};
-
-ok ( -e $new_storage, 'new storage exists' );
-
-# read it back
-$fc = $rs->find({ id => $fc->id });
-
-is ( $fc->file->{filename}, $new_fname, 'new filname matches' );
-ok ( compare($new_storage, $new_source_file) == 0, 'new content matches' );
-
-$fc->delete;
-
-ok ( ! -e $storage, 'storage deleted' );
-
-$fh = $source_file->openr or die "failed to open $source_file: $!\n";
-$fc = $rs->create({ file => { handle => $fh, filename => $fname } });
-
-# read it back
-$fc->discard_changes;
-
-$storage = file(
- $fc->column_info('file')->{file_column_path},
- $fc->id,
- $fc->file->{filename},
-);
-
-TODO: {
- local $TODO = 'need resultset delete override to delete_all';
- $rs->delete;
- ok ( ! -e $storage, 'storage does not exist after $rs->delete' );
-};
Modified: DBIx-Class/0.08/branches/mc_fixes/t/bind/bindtype_columns.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/bind/bindtype_columns.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/bind/bindtype_columns.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -49,10 +49,7 @@
is($row->get_column('bytea'), $big_long_string, "Created the blob correctly.");
}
-TODO: {
- local $TODO =
- 'Passing bind attributes to $sth->bind_param() should be implemented (it only works in $storage->insert ATM)';
-
+{
my $rs = $schema->resultset('BindType')->search({ bytea => $big_long_string });
# search on the bytea column (select)
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/core.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/68inflate.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/core.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/core.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,113 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+eval { require DateTime };
+plan skip_all => "Need DateTime for inflation tests" if $@;
+
+plan tests => 22;
+
+$schema->class('CD') ->inflate_column( 'year',
+ { inflate => sub { DateTime->new( year => shift ) },
+ deflate => sub { shift->year } }
+);
+
+# inflation test
+my $cd = $schema->resultset("CD")->find(3);
+
+is( ref($cd->year), 'DateTime', 'year is a DateTime, ok' );
+
+is( $cd->year->year, 1997, 'inflated year ok' );
+
+is( $cd->year->month, 1, 'inflated month ok' );
+
+eval { $cd->year(\'year +1'); };
+ok(!$@, 'updated year using a scalarref');
+$cd->update();
+$cd->discard_changes();
+
+is( ref($cd->year), 'DateTime', 'year is still a DateTime, ok' );
+
+is( $cd->year->year, 1998, 'updated year, bypassing inflation' );
+
+is( $cd->year->month, 1, 'month is still 1' );
+
+# get_inflated_column test
+
+is( ref($cd->get_inflated_column('year')), 'DateTime', 'get_inflated_column produces a DateTime');
+
+# deflate test
+my $now = DateTime->now;
+$cd->year( $now );
+$cd->update;
+
+$cd = $schema->resultset("CD")->find(3);
+is( $cd->year->year, $now->year, 'deflate ok' );
+
+# set_inflated_column test
+eval { $cd->set_inflated_column('year', $now) };
+ok(!$@, 'set_inflated_column with DateTime object');
+$cd->update;
+
+$cd = $schema->resultset("CD")->find(3);
+is( $cd->year->year, $now->year, 'deflate ok' );
+
+$cd = $schema->resultset("CD")->find(3);
+my $before_year = $cd->year->year;
+eval { $cd->set_inflated_column('year', \'year + 1') };
+ok(!$@, 'set_inflated_column to "year + 1"');
+$cd->update;
+
+TODO: {
+ local $TODO = 'this was left in without a TODO - should it work?';
+
+ eval {
+ $cd->store_inflated_column('year', \'year + 1');
+ is_deeply( $cd->year, \'year + 1', 'deflate ok' );
+ };
+ ok(!$@, 'store_inflated_column to "year + 1"');
+}
+
+$cd = $schema->resultset("CD")->find(3);
+is( $cd->year->year, $before_year+1, 'deflate ok' );
+
+# store_inflated_column test
+$cd = $schema->resultset("CD")->find(3);
+eval { $cd->store_inflated_column('year', $now) };
+ok(!$@, 'store_inflated_column with DateTime object');
+$cd->update;
+
+is( $cd->year->year, $now->year, 'deflate ok' );
+
+# update tests
+$cd = $schema->resultset("CD")->find(3);
+eval { $cd->update({'year' => $now}) };
+ok(!$@, 'update using DateTime object ok');
+is($cd->year->year, $now->year, 'deflate ok');
+
+$cd = $schema->resultset("CD")->find(3);
+$before_year = $cd->year->year;
+eval { $cd->update({'year' => \'year + 1'}) };
+ok(!$@, 'update using scalarref ok');
+
+$cd = $schema->resultset("CD")->find(3);
+is($cd->year->year, $before_year + 1, 'deflate ok');
+
+# discard_changes test
+$cd = $schema->resultset("CD")->find(3);
+# inflate the year
+$before_year = $cd->year->year;
+$cd->update({ year => \'year + 1'});
+$cd->discard_changes;
+
+is($cd->year->year, $before_year + 1, 'discard_changes clears the inflated value');
+
+my $copy = $cd->copy({ year => $now, title => "zemoose" });
+
+isnt( $copy->year->year, $before_year, "copy" );
+
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,76 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+eval { require DateTime::Format::SQLite };
+plan $@
+ ? ( skip_all => "Need DateTime::Format::SQLite for DT inflation tests" )
+ : ( tests => 18 )
+;
+
+# inflation test
+my $event = $schema->resultset("Event")->find(1);
+
+isa_ok($event->starts_at, 'DateTime', 'DateTime returned');
+
+# klunky, but makes older Test::More installs happy
+my $starts = $event->starts_at;
+is("$starts", '2006-04-25T22:24:33', 'Correct date/time');
+
+TODO: {
+ local $TODO = "We can't do this yet before 0.09" if DBIx::Class->VERSION < 0.09;
+
+ ok(my $row =
+ $schema->resultset('Event')->search({ starts_at => $starts })->single);
+ is(eval { $row->id }, 1, 'DT in search');
+
+ ok($row =
+ $schema->resultset('Event')->search({ starts_at => { '>=' => $starts } })->single);
+ is(eval { $row->id }, 1, 'DT in search with condition');
+}
+
+# create using DateTime
+my $created = $schema->resultset('Event')->create({
+ starts_at => DateTime->new(year=>2006, month=>6, day=>18),
+ created_on => DateTime->new(year=>2006, month=>6, day=>23)
+});
+my $created_start = $created->starts_at;
+
+isa_ok($created->starts_at, 'DateTime', 'DateTime returned');
+is("$created_start", '2006-06-18T00:00:00', 'Correct date/time');
+
+## timestamp field
+isa_ok($event->created_on, 'DateTime', 'DateTime returned');
+
+## varchar fields
+isa_ok($event->varchar_date, 'DateTime', 'DateTime returned');
+isa_ok($event->varchar_datetime, 'DateTime', 'DateTime returned');
+
+## skip inflation field
+isnt(ref($event->skip_inflation), 'DateTime', 'No DateTime returned for skip inflation column');
+
+# klunky, but makes older Test::More installs happy
+my $createo = $event->created_on;
+is("$createo", '2006-06-22T21:00:05', 'Correct date/time');
+
+my $created_cron = $created->created_on;
+
+isa_ok($created->created_on, 'DateTime', 'DateTime returned');
+is("$created_cron", '2006-06-23T00:00:00', 'Correct date/time');
+
+## varchar field using inflate_date => 1
+my $varchar_date = $event->varchar_date;
+is("$varchar_date", '2006-07-23T00:00:00', 'Correct date/time');
+
+## varchar field using inflate_datetime => 1
+my $varchar_datetime = $event->varchar_datetime;
+is("$varchar_datetime", '2006-05-22T19:05:07', 'Correct date/time');
+
+## skip inflation field
+my $skip_inflation = $event->skip_inflation;
+is ("$skip_inflation", '2006-04-21 18:04:06', 'Correct date/time');
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_mysql.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_mysql.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_mysql.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,97 @@
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::Schema;
+
+{
+ local $SIG{__WARN__} = sub { warn @_ if $_[0] !~ /extra \=\> .+? has been deprecated/ };
+ DBICTest::Schema->load_classes('EventTZ');
+ DBICTest::Schema->load_classes('EventTZDeprecated');
+}
+
+eval { require DateTime::Format::MySQL };
+plan $@
+ ? ( skip_all => "Need DateTime::Format::MySQL for inflation tests")
+ : ( tests => 33 )
+;
+
+my $schema = DBICTest->init_schema();
+
+# Test "timezone" parameter
+foreach my $tbl (qw/EventTZ EventTZDeprecated/) {
+ my $event_tz = $schema->resultset($tbl)->create({
+ starts_at => DateTime->new(year=>2007, month=>12, day=>31, time_zone => "America/Chicago" ),
+ created_on => DateTime->new(year=>2006, month=>1, day=>31,
+ hour => 13, minute => 34, second => 56, time_zone => "America/New_York" ),
+ });
+
+ is ($event_tz->starts_at->day_name, "Montag", 'Locale de_DE loaded: day_name');
+ is ($event_tz->starts_at->month_name, "Dezember", 'Locale de_DE loaded: month_name');
+ is ($event_tz->created_on->day_name, "Tuesday", 'Default locale loaded: day_name');
+ is ($event_tz->created_on->month_name, "January", 'Default locale loaded: month_name');
+
+ my $starts_at = $event_tz->starts_at;
+ is("$starts_at", '2007-12-31T00:00:00', 'Correct date/time using timezone');
+
+ my $created_on = $event_tz->created_on;
+ is("$created_on", '2006-01-31T12:34:56', 'Correct timestamp using timezone');
+ is($event_tz->created_on->time_zone->name, "America/Chicago", "Correct timezone");
+
+ my $loaded_event = $schema->resultset($tbl)->find( $event_tz->id );
+
+ isa_ok($loaded_event->starts_at, 'DateTime', 'DateTime returned');
+ $starts_at = $loaded_event->starts_at;
+ is("$starts_at", '2007-12-31T00:00:00', 'Loaded correct date/time using timezone');
+ is($starts_at->time_zone->name, 'America/Chicago', 'Correct timezone');
+
+ isa_ok($loaded_event->created_on, 'DateTime', 'DateTime returned');
+ $created_on = $loaded_event->created_on;
+ is("$created_on", '2006-01-31T12:34:56', 'Loaded correct timestamp using timezone');
+ is($created_on->time_zone->name, 'America/Chicago', 'Correct timezone');
+
+ # Test floating timezone warning
+ # We expect one warning
+ SKIP: {
+ skip "ENV{DBIC_FLOATING_TZ_OK} was set, skipping", 1 if $ENV{DBIC_FLOATING_TZ_OK};
+ local $SIG{__WARN__} = sub {
+ like(
+ shift,
+ qr/You're using a floating timezone, please see the documentation of DBIx::Class::InflateColumn::DateTime for an explanation/,
+ 'Floating timezone warning'
+ );
+ };
+ my $event_tz_floating = $schema->resultset($tbl)->create({
+ starts_at => DateTime->new(year=>2007, month=>12, day=>31, ),
+ created_on => DateTime->new(year=>2006, month=>1, day=>31,
+ hour => 13, minute => 34, second => 56, ),
+ });
+ delete $SIG{__WARN__};
+ };
+
+ # This should fail to set
+ my $prev_str = "$created_on";
+ $loaded_event->update({ created_on => '0000-00-00' });
+ is("$created_on", $prev_str, "Don't update invalid dates");
+}
+
+# Test invalid DT
+my $invalid = $schema->resultset('EventTZ')->create({
+ starts_at => '0000-00-00',
+ created_on => DateTime->now,
+});
+
+is( $invalid->get_column('starts_at'), '0000-00-00', "Invalid date stored" );
+is( $invalid->starts_at, undef, "Inflate to undef" );
+
+$invalid->created_on('0000-00-00');
+$invalid->update;
+
+throws_ok (
+ sub { $invalid->created_on },
+ qr/invalid date format/i,
+ "Invalid date format exception"
+);
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_pg.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/89inflate_datetime.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_pg.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/datetime_pg.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,30 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+{
+ local $SIG{__WARN__} = sub { warn @_ if $_[0] !~ /extra \=\> .+? has been deprecated/ };
+ DBICTest::Schema->load_classes('EventTZPg');
+}
+
+eval { require DateTime::Format::Pg };
+plan $@
+ ? ( skip_all => 'Need DateTime::Format::Pg for timestamp inflation tests')
+ : ( tests => 3 )
+;
+
+
+my $schema = DBICTest->init_schema();
+
+{
+ my $event = $schema->resultset("EventTZPg")->find(1);
+ $event->update({created_on => '2009-01-15 17:00:00+00'});
+ $event->discard_changes;
+ isa_ok($event->created_on, "DateTime") or diag $event->created_on;
+ is($event->created_on->time_zone->name, "America/Chicago", "Timezone changed");
+ # Time zone difference -> -6hours
+ is($event->created_on->iso8601, "2009-01-15T11:00:00", "Time with TZ correct");
+}
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/file_column.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/96file_column.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/file_column.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/file_column.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,85 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use IO::File;
+use File::Compare;
+use Path::Class qw/file/;
+
+my $schema = DBICTest->init_schema();
+
+plan tests => 10;
+
+my $rs = $schema->resultset('FileColumn');
+my $source_file = file(__FILE__);
+my $fname = $source_file->basename;
+my $fh = $source_file->open('r') or die "failed to open $source_file: $!\n";
+my $fc = eval {
+ $rs->create({ file => { handle => $fh, filename => $fname } })
+};
+is ( $@, '', 'created' );
+
+$fh->close;
+
+my $storage = file(
+ $fc->column_info('file')->{file_column_path},
+ $fc->id,
+ $fc->file->{filename},
+);
+ok ( -e $storage, 'storage exists' );
+
+# read it back
+$fc = $rs->find({ id => $fc->id });
+
+is ( $fc->file->{filename}, $fname, 'filename matches' );
+ok ( compare($storage, $source_file) == 0, 'file contents matches' );
+
+# update
+my $new_fname = 'File.pm';
+my $new_source_file = file(qw/lib DBIx Class InflateColumn File.pm/);
+my $new_storage = file(
+ $fc->column_info('file')->{file_column_path},
+ $fc->id,
+ $new_fname,
+);
+$fh = $new_source_file->open('r') or die "failed to open $new_source_file: $!\n";
+
+$fc->file({ handle => $fh, filename => $new_fname });
+$fc->update;
+
+TODO: {
+ local $TODO = 'design change required';
+ ok ( ! -e $storage, 'old storage does not exist' );
+};
+
+ok ( -e $new_storage, 'new storage exists' );
+
+# read it back
+$fc = $rs->find({ id => $fc->id });
+
+is ( $fc->file->{filename}, $new_fname, 'new filename matches' );
+ok ( compare($new_storage, $new_source_file) == 0, 'new content matches' );
+
+$fc->delete;
+
+ok ( ! -e $storage, 'storage deleted' );
+
+$fh = $source_file->openr or die "failed to open $source_file: $!\n";
+$fc = $rs->create({ file => { handle => $fh, filename => $fname } });
+
+# read it back
+$fc->discard_changes;
+
+$storage = file(
+ $fc->column_info('file')->{file_column_path},
+ $fc->id,
+ $fc->file->{filename},
+);
+
+TODO: {
+ local $TODO = 'need resultset delete override to delete_all';
+ $rs->delete;
+ ok ( ! -e $storage, 'storage does not exist after $rs->delete' );
+};
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/hri.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/68inflate_resultclass_hashrefinflator.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/hri.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/hri.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,126 @@
+use strict;
+use warnings;
+
+use Test::More qw(no_plan);
+use lib qw(t/lib);
+use DBICTest;
+my $schema = DBICTest->init_schema();
+
+
+# Under some versions of SQLite if the $rs is left hanging around it will lock
+# So we create a scope here cos I'm lazy
+{
+ my $rs = $schema->resultset('CD');
+
+ # get the defined columns
+ my @dbic_cols = sort $rs->result_source->columns;
+
+ # use the hashref inflator class as result class
+ $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
+
+ # fetch first record
+ my $datahashref1 = $rs->first;
+
+ my @hashref_cols = sort keys %$datahashref1;
+
+ is_deeply( \@dbic_cols, \@hashref_cols, 'returned columns' );
+}
+
+
+sub check_cols_of {
+ my ($dbic_obj, $datahashref) = @_;
+
+ foreach my $col (keys %$datahashref) {
+ # plain column
+ if (not ref ($datahashref->{$col}) ) {
+ is ($datahashref->{$col}, $dbic_obj->get_column($col), 'same value');
+ }
+ # related table entry (belongs_to)
+ elsif (ref ($datahashref->{$col}) eq 'HASH') {
+ check_cols_of($dbic_obj->$col, $datahashref->{$col});
+ }
+ # multiple related entries (has_many)
+ elsif (ref ($datahashref->{$col}) eq 'ARRAY') {
+ my @dbic_reltable = $dbic_obj->$col;
+ my @hashref_reltable = @{$datahashref->{$col}};
+
+ is (scalar @hashref_reltable, scalar @dbic_reltable, 'number of related entries');
+
+ # for my $index (0..scalar @hashref_reltable) {
+ for my $index (0..scalar @dbic_reltable) {
+ my $dbic_reltable_obj = $dbic_reltable[$index];
+ my $hashref_reltable_entry = $hashref_reltable[$index];
+
+ check_cols_of($dbic_reltable_obj, $hashref_reltable_entry);
+ }
+ }
+ }
+}
+
+# create a cd without tracks for testing empty has_many relationship
+$schema->resultset('CD')->create({ title => 'Silence is golden', artist => 3, year => 2006 });
+
+# order_by to ensure both resultsets have the rows in the same order
+# also check result_class-as-an-attribute syntax
+my $rs_dbic = $schema->resultset('CD')->search(undef,
+ {
+ prefetch => [ qw/ artist tracks / ],
+ order_by => [ 'me.cdid', 'tracks.position' ],
+ }
+);
+my $rs_hashrefinf = $schema->resultset('CD')->search(undef,
+ {
+ prefetch => [ qw/ artist tracks / ],
+ order_by => [ 'me.cdid', 'tracks.position' ],
+ result_class => 'DBIx::Class::ResultClass::HashRefInflator',
+ }
+);
+
+my @dbic = $rs_dbic->all;
+my @hashrefinf = $rs_hashrefinf->all;
+
+for my $index (0 .. $#hashrefinf) {
+ my $dbic_obj = $dbic[$index];
+ my $datahashref = $hashrefinf[$index];
+
+ check_cols_of($dbic_obj, $datahashref);
+}
+
+# sometimes for ultra-mega-speed you want to fetch columns in esoteric ways
+# check the inflator over a non-fetching join
+$rs_dbic = $schema->resultset ('Artist')->search ({ 'me.artistid' => 1}, {
+ prefetch => { cds => 'tracks' },
+ order_by => [qw/cds.cdid tracks.trackid/],
+});
+
+$rs_hashrefinf = $schema->resultset ('Artist')->search ({ 'me.artistid' => 1}, {
+ join => { cds => 'tracks' },
+ select => [qw/name tracks.title tracks.cd /],
+ as => [qw/name cds.tracks.title cds.tracks.cd /],
+ order_by => [qw/cds.cdid tracks.trackid/],
+ result_class => 'DBIx::Class::ResultClass::HashRefInflator',
+});
+
+@dbic = map { $_->tracks->all } ($rs_dbic->first->cds->all);
+@hashrefinf = $rs_hashrefinf->all;
+
+is (scalar @dbic, scalar @hashrefinf, 'Equal number of tracks fetched');
+
+for my $index (0 .. $#hashrefinf) {
+ my $track = $dbic[$index];
+ my $datahashref = $hashrefinf[$index];
+
+ is ($track->cd->artist->name, $datahashref->{name}, 'Brought back correct artist');
+ for my $col (keys %{$datahashref->{cds}{tracks}}) {
+ is ($track->get_column ($col), $datahashref->{cds}{tracks}{$col}, "Correct track '$col'");
+ }
+}
+
+# check for same query as above but using extended columns syntax
+$rs_hashrefinf = $schema->resultset ('Artist')->search ({ 'me.artistid' => 1}, {
+ join => { cds => 'tracks' },
+ columns => {name => 'name', 'cds.tracks.title' => 'tracks.title', 'cds.tracks.cd' => 'tracks.cd'},
+ order_by => [qw/cds.cdid tracks.trackid/],
+});
+$rs_hashrefinf->result_class('DBIx::Class::ResultClass::HashRefInflator');
+is_deeply [$rs_hashrefinf->all], \@hashrefinf, 'Check query using extended columns syntax';
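For readers unfamiliar with HashRefInflator, the test above exercises both ways of enabling it. A minimal usage sketch, assuming an already connected $schema with the CD/tracks sources used throughout these tests:

    my $rs = $schema->resultset('CD');

    # set it on an existing resultset...
    $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');

    # ...or pass it as a search attribute
    my $hri_rs = $schema->resultset('CD')->search(undef, {
        prefetch     => 'tracks',
        order_by     => 'me.cdid',
        result_class => 'DBIx::Class::ResultClass::HashRefInflator',
    });

    # rows now come back as plain hashrefs; prefetched has_many relationships
    # appear as arrayrefs of hashrefs instead of row objects
    for my $cd ($hri_rs->all) {
        printf "%s: %d tracks\n", $cd->{title}, scalar @{ $cd->{tracks} };
    }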
Copied: DBIx-Class/0.08/branches/mc_fixes/t/inflate/serialize.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/68inflate_serialize.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/inflate/serialize.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/inflate/serialize.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,86 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+use Data::Dumper;
+
+my @serializers = (
+ { module => 'YAML.pm',
+ inflater => sub { YAML::Load (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); YAML::Dump (shift) },
+ },
+ { module => 'Storable.pm',
+ inflater => sub { Storable::thaw (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); Storable::nfreeze (shift) },
+ },
+);
+
+
+my $selected;
+foreach my $serializer (@serializers) {
+ eval { require $serializer->{module} };
+ unless ($@) {
+ $selected = $serializer;
+ last;
+ }
+}
+
+plan (skip_all => "No suitable serializer found") unless $selected;
+
+plan (tests => 8);
+DBICTest::Schema::Serialized->inflate_column( 'serialized',
+ { inflate => $selected->{inflater},
+ deflate => $selected->{deflater},
+ },
+);
+Class::C3->reinitialize;
+
+my $struct_hash = {
+ a => 1,
+ b => [
+ { c => 2 },
+ ],
+ d => 3,
+};
+
+my $struct_array = [
+ 'a',
+ {
+ b => 1,
+ c => 2
+ },
+ 'd',
+];
+
+my $rs = $schema->resultset('Serialized');
+my $inflated;
+
+#======= testing hashref serialization
+
+my $object = $rs->create( {
+ id => 1,
+ serialized => '',
+} );
+ok($object->update( { serialized => $struct_hash } ), 'hashref deflation');
+ok($inflated = $object->serialized, 'hashref inflation');
+is_deeply($inflated, $struct_hash, 'inflated hash matches original');
+
+$object = $rs->create( {
+ id => 2,
+ serialized => '',
+} );
+eval { $object->set_inflated_column('serialized', $struct_hash) };
+ok(!$@, 'set_inflated_column to a hashref');
+is_deeply($object->serialized, $struct_hash, 'inflated hash matches original');
+
+
+#====== testing arrayref serialization
+
+ok($object->update( { serialized => $struct_array } ), 'arrayref deflation');
+ok($inflated = $object->serialized, 'arrayref inflation');
+is_deeply($inflated, $struct_array, 'inflated array matches original');
Added: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICNSTest/Bogus/B.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICNSTest/Bogus/B.pm (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICNSTest/Bogus/B.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,6 @@
+package DBICNSTest::Result::B;
+use base qw/DBIx::Class/;
+__PACKAGE__->load_components(qw/PK::Auto Core/);
+__PACKAGE__->table('b');
+__PACKAGE__->add_columns('b');
+1;
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/AuthorCheck.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/AuthorCheck.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/AuthorCheck.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -84,7 +84,7 @@
sub _find_co_root {
my @mod_parts = split /::/, (__PACKAGE__ . '.pm');
- my $rel_path = file (@mod_parts);
+ my $rel_path = join ('/', @mod_parts); # %INC stores paths with / regardless of OS
return undef unless ($INC{$rel_path});
@@ -93,7 +93,7 @@
# - do 'cd ..' as many times as necessary to get to t/lib/../..
my $root = dir ($INC{$rel_path});
- for (0 .. @mod_parts + 1) {
+ for (1 .. @mod_parts + 2) {
$root = $root->parent;
}
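The AuthorCheck fix above hinges on the fact that %INC keys are the relative paths handed to require, always using '/' as the separator even on Win32, while the values are real filesystem paths. A small standalone sketch of resolving a loaded module back to a directory this way; illustrative only, and the module name is just an example:

    use strict;
    use warnings;
    use Path::Class qw(dir);

    my $module    = 'DBICTest::AuthorCheck';      # example module, assumed already loaded
    my @mod_parts = split /::/, $module . '.pm';
    my $inc_key   = join '/', @mod_parts;         # "DBICTest/AuthorCheck.pm" - '/' even on Win32

    if (my $loaded_from = $INC{$inc_key}) {
        my $dir = dir($loaded_from);
        # one ->parent per component of the %INC key walks back up to the
        # @INC directory the module was loaded from (t/lib in this distribution);
        # _find_co_root climbs two levels further to reach the checkout root
        $dir = $dir->parent for 1 .. @mod_parts;
        print "loaded from include dir: $dir\n";
    }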
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Artist.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Artist.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Artist.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -54,9 +54,9 @@
);
__PACKAGE__->has_many(
- artist_to_artwork => 'DBICTest::Schema::Artwork_to_Artist' => 'artist_id'
+ artwork_to_artist => 'DBICTest::Schema::Artwork_to_Artist' => 'artist_id'
);
-__PACKAGE__->many_to_many('artworks', 'artist_to_artwork', 'artwork');
+__PACKAGE__->many_to_many('artworks', 'artwork_to_artist', 'artwork');
sub sqlt_deploy_hook {
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Bookmark.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Bookmark.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Bookmark.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -19,6 +19,6 @@
);
__PACKAGE__->set_primary_key('id');
-__PACKAGE__->belongs_to(link => 'DBICTest::Schema::Link' );
+__PACKAGE__->belongs_to(link => 'DBICTest::Schema::Link', 'link', { on_delete => 'SET NULL' } );
1;
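The Bookmark/Link changes above and the matching 86sqlt.t expectation show relationship attributes flowing through to the generated DDL. A minimal sketch of a belongs_to carrying explicit foreign-key actions; the class and column names below are hypothetical stand-ins, not part of this commit:

    package My::Schema::Result::Bookmark;
    use base qw/DBIx::Class/;

    __PACKAGE__->load_components(qw/PK::Auto Core/);
    __PACKAGE__->table('bookmark');
    __PACKAGE__->add_columns(
        id   => { data_type => 'integer', is_auto_increment => 1 },
        link => { data_type => 'integer', is_nullable => 1 },
    );
    __PACKAGE__->set_primary_key('id');

    # third argument names the FK column, fourth passes relationship attributes;
    # on_delete/on_update end up on the foreign key constraint emitted by
    # SQL::Translator when the schema is deployed
    __PACKAGE__->belongs_to(
        link => 'My::Schema::Result::Link',
        'link',
        { on_delete => 'SET NULL', on_update => 'CASCADE' },
    );

    1;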
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Event.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Event.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Event.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -10,7 +10,7 @@
__PACKAGE__->add_columns(
id => { data_type => 'integer', is_auto_increment => 1 },
- starts_at => { data_type => 'datetime', datetime_undef_if_invalid => 1 },
+ starts_at => { data_type => 'datetime' },
created_on => { data_type => 'timestamp' },
varchar_date => { data_type => 'varchar', inflate_date => 1, size => 20, is_nullable => 1 },
varchar_datetime => { data_type => 'varchar', inflate_datetime => 1, size => 20, is_nullable => 1 },
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZ.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZ.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZ.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -10,10 +10,15 @@
__PACKAGE__->add_columns(
id => { data_type => 'integer', is_auto_increment => 1 },
- starts_at => { data_type => 'datetime', timezone => "America/Chicago", locale => 'de_DE' },
+ starts_at => { data_type => 'datetime', timezone => "America/Chicago", locale => 'de_DE', datetime_undef_if_invalid => 1 },
created_on => { data_type => 'timestamp', timezone => "America/Chicago", floating_tz_ok => 1 },
);
__PACKAGE__->set_primary_key('id');
+sub _datetime_parser {
+ require DateTime::Format::MySQL;
+ DateTime::Format::MySQL->new();
+}
+
1;
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZDeprecated.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZDeprecated.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/EventTZDeprecated.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -16,4 +16,10 @@
__PACKAGE__->set_primary_key('id');
+sub _datetime_parser {
+ require DateTime::Format::MySQL;
+ DateTime::Format::MySQL->new();
+}
+
+
1;
Modified: DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Link.pm
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Link.pm 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/lib/DBICTest/Schema/Link.pm 2009-06-11 14:54:49 UTC (rev 6624)
@@ -25,6 +25,8 @@
);
__PACKAGE__->set_primary_key('id');
+__PACKAGE__->has_many ( bookmarks => 'DBICTest::Schema::Bookmark', 'link', { cascade_delete => 0 } );
+
use overload '""' => sub { shift->url }, fallback=> 1;
1;
Added: DBIx-Class/0.08/branches/mc_fixes/t/multi_create/insert_defaults.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/multi_create/insert_defaults.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/multi_create/insert_defaults.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,45 @@
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 8;
+
+my $schema = DBICTest->init_schema();
+
+# Attempt sequential nested find_or_create with autoinc
+# As a side effect re-test nested default create (both the main object and the relation are {})
+my $bookmark_rs = $schema->resultset('Bookmark');
+my $last_bookmark = $bookmark_rs->search ({}, { order_by => { -desc => 'id' }, rows => 1})->single;
+my $last_link = $bookmark_rs->search_related ('link', {}, { order_by => { -desc => 'link.id' }, rows => 1})->single;
+
+# find_or_create a bookmark-link combo with data for a non-existing link
+my $o1 = $bookmark_rs->find_or_create ({ link => { url => 'something-weird' } });
+is ($o1->id, $last_bookmark->id + 1, '1st bookmark ID');
+is ($o1->link->id, $last_link->id + 1, '1st related link ID');
+
+# find_or_create a bookmark-link combo without any data at all (default insert)
+# should extend this test to all available Storage's, and fix them accordingly
+my $o2 = $bookmark_rs->find_or_create ({ link => {} });
+is ($o2->id, $last_bookmark->id + 2, '2nd bookmark ID');
+is ($o2->link->id, $last_link->id + 2, '2nd related link ID');
+
+# make sure the pre-existing link has only one related bookmark
+is ($last_link->bookmarks->count, 1, 'Expecting only 1 bookmark and 1 link, someone mucked with the table!');
+
+# find_or_create a bookmark without any data, but supplying an existing link object
+# should return $last_bookmark
+my $o0 = $bookmark_rs->find_or_create ({ link => $last_link });
+is_deeply ({ $o0->columns}, {$last_bookmark->columns}, 'Correctly identify a row given a relationship');
+
+# inject an additional bookmark and repeat the test
+# should warn and return the first row
+my $o3 = $last_link->create_related ('bookmarks', {});
+is ($o3->id, $last_bookmark->id + 3, '3rd bookmark ID');
+
+local $SIG{__WARN__} = sub { warn @_ unless $_[0] =~ /Query returned more than one row/ };
+my $oX = $bookmark_rs->find_or_create ({ link => $last_link });
+is_deeply ({ $oX->columns}, {$last_bookmark->columns}, 'Correctly identify a row given a relationship');
Deleted: DBIx-Class/0.08/branches/mc_fixes/t/prefetch/pollute_already_joined.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/prefetch/pollute_already_joined.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/prefetch/pollute_already_joined.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -1,63 +0,0 @@
-use strict;
-use warnings;
-
-use Test::More;
-use Test::Exception;
-use lib qw(t/lib);
-use DBICTest;
-use Data::Dumper;
-
-my $schema = DBICTest->init_schema();
-
-my $orig_debug = $schema->storage->debug;
-
-use IO::File;
-
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 10 );
-}
-
-# A search() with prefetch seems to pollute an already joined resultset
-# in a way that offsets future joins (adapted from a test case by Debolaz)
-{
- my ($cd_rs, $attrs);
-
- # test a real-life case - rs is obtained by an implicit m2m join
- $cd_rs = $schema->resultset ('Producer')->first->cds;
- $attrs = Dumper $cd_rs->{attrs};
-
- $cd_rs->search ({})->all;
- is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after a simple search');
-
- lives_ok (sub {
- $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
- is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after search with prefetch');
- }, 'first prefetching search ok');
-
- lives_ok (sub {
- $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
- is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after another search with prefetch')
- }, 'second prefetching search ok');
-
-
- # test a regular rs with an empty seen_join injected - it should still work!
- $cd_rs = $schema->resultset ('CD');
- $cd_rs->{attrs}{seen_join} = {};
- $attrs = Dumper $cd_rs->{attrs};
-
- $cd_rs->search ({})->all;
- is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after a simple search');
-
- lives_ok (sub {
- $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
- is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after search with prefetch');
- }, 'first prefetching search ok');
-
- lives_ok (sub {
- $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
- is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after another search with prefetch')
- }, 'second prefetching search ok');
-}
Modified: DBIx-Class/0.08/branches/mc_fixes/t/prefetch/rows_bug.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/prefetch/rows_bug.t 2009-06-11 14:54:09 UTC (rev 6623)
+++ DBIx-Class/0.08/branches/mc_fixes/t/prefetch/rows_bug.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -7,7 +7,8 @@
use lib qw(t/lib);
use DBICTest;
-plan $@ ? (skip_all => 'needs DBD::SQLite for testing') : (tests => 2);
+plan skip_all => 'fix pending';
+#plan tests => 4;
my $schema = DBICTest->init_schema();
my $no_prefetch = $schema->resultset('Artist')->search(
@@ -23,61 +24,45 @@
}
);
-my $no_prefetch_count = 0;
-my $use_prefetch_count = 0;
-
is($no_prefetch->count, $use_prefetch->count, '$no_prefetch->count == $use_prefetch->count');
+is(
+ scalar ($no_prefetch->all),
+ scalar ($use_prefetch->all),
+ "Amount of returned rows is right"
+);
-TODO: {
- local $TODO = "This is a difficult bug to fix, workaround is not to use prefetch with rows";
- $no_prefetch_count++ while $no_prefetch->next;
- $use_prefetch_count++ while $use_prefetch->next;
- is(
- $no_prefetch_count,
- $use_prefetch_count,
- "manual row count confirms consistency"
- . " (\$no_prefetch_count == $no_prefetch_count, "
- . " \$use_prefetch_count == $use_prefetch_count)"
- );
-}
-__END__
-The fix is to, when using prefetch, take the query and put it into a subquery
-joined to the tables we're prefetching from. This might result in the same
-table being joined once in the main subquery and once in the main query. This
-may actually resolve other, unknown edgecase bugs. It is also the right way
-to do prefetching. Optimizations can come later.
-This means that:
- $foo_rs->search(
- { ... },
- {
- prefetch => 'bar',
- ...
- },
- );
+my $artist_many_cds = $schema->resultset('Artist')->search ( {}, {
+ join => 'cds',
+ group_by => 'me.artistid',
+ having => \ 'count(cds.cdid) > 1',
+})->first;
-becomes:
- my $temp = $foo_rs->search(
- { ... },
- {
- join => 'bar',
- ...
- },
- );
- $foo_rs->storage->schema->resultset('foo')->search(
- undef,
- {
- from => [
- { me => $temp->as_query },
- ],
- prefetch => 'bar',
- },
- );
-Problem:
- * The prefetch->join change needs to happen ONLY IF there are conditions
- that depend on bar being joined.
- * How will this work when the $rs is further searched on? Those clauses
- need to be added to the subquery, not the outer one. This is particularly
- true if rows is added in the attribute later per the Pager.
+$no_prefetch = $schema->resultset('Artist')->search(
+ { artistid => $artist_many_cds->id },
+ { rows => 1 }
+);
+
+$use_prefetch = $schema->resultset('Artist')->search(
+ { artistid => $artist_many_cds->id },
+ {
+ prefetch => 'cds',
+ rows => 1
+ }
+);
+
+my $prefetch_artist = $use_prefetch->first;
+my $normal_artist = $no_prefetch->first;
+
+is(
+ $prefetch_artist->cds->count,
+ $normal_artist->cds->count,
+ "Count of child rel with prefetch + rows => 1 is right"
+);
+is (
+ scalar ($prefetch_artist->cds->all),
+ scalar ($normal_artist->cds->all),
+ "Amount of child rel rows with prefetch + rows => 1 is right"
+);
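The __END__ notes deleted from rows_bug.t above describe the intended fix: run the row-limited query as a subselect and prefetch against it. A rough illustration of that shape via as_query and the from attribute, hypothetical and not what this commit implements; the Artist/cds names follow the test schema:

    # inner resultset carrying the search conditions and the row limit
    my $limited = $schema->resultset('Artist')->search(
        { 'cds.title' => { '!=', undef } },
        { join => 'cds', rows => 1 },
    );

    # wrap it as a subselect and prefetch against that, per the removed notes;
    # the outer query joins the related table once more for the prefetch
    my $with_prefetch = $schema->resultset('Artist')->search(undef, {
        from     => [ { me => $limited->as_query } ],
        prefetch => 'cds',
    });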
Copied: DBIx-Class/0.08/branches/mc_fixes/t/search/preserve_original_rs.t (from rev 6534, DBIx-Class/0.08/branches/mc_fixes/t/prefetch/pollute_already_joined.t)
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/search/preserve_original_rs.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/search/preserve_original_rs.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,89 @@
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+
+use lib qw(t/lib);
+use DBIC::SqlMakerTest;
+use DBIC::DebugObj;
+use DBICTest;
+use Data::Dumper;
+
+my $schema = DBICTest->init_schema();
+
+plan tests => 22;
+
+# A search() with prefetch seems to pollute an already joined resultset
+# in a way that offsets future joins (adapted from a test case by Debolaz)
+{
+ my ($cd_rs, $attrs);
+
+ # test a real-life case - rs is obtained by an implicit m2m join
+ $cd_rs = $schema->resultset ('Producer')->first->cds;
+ $attrs = Dumper $cd_rs->{attrs};
+
+ $cd_rs->search ({})->all;
+ is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after a simple search');
+
+ lives_ok (sub {
+ $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
+ is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after search with prefetch');
+ }, 'first prefetching search ok');
+
+ lives_ok (sub {
+ $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
+ is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after another search with prefetch')
+ }, 'second prefetching search ok');
+
+
+ # test a regular rs with an empty seen_join injected - it should still work!
+ $cd_rs = $schema->resultset ('CD');
+ $cd_rs->{attrs}{seen_join} = {};
+ $attrs = Dumper $cd_rs->{attrs};
+
+ $cd_rs->search ({})->all;
+ is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after a simple search');
+
+ lives_ok (sub {
+ $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
+ is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after search with prefetch');
+ }, 'first prefetching search ok');
+
+ lives_ok (sub {
+ $cd_rs->search ({'artist.artistid' => 1}, { prefetch => 'artist' })->all;
+ is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after another search with prefetch')
+ }, 'second prefetching search ok');
+}
+
+# Also test search_related, but now that we have as_query simply compare before and after
+my $artist = $schema->resultset ('Artist')->first;
+my %q;
+
+$q{a2a}{rs} = $artist->search_related ('artwork_to_artist');
+$q{a2a}{query} = $q{a2a}{rs}->as_query;
+
+$q{artw}{rs} = $q{a2a}{rs}->search_related ('artwork',
+ { },
+ { join => ['cd', 'artwork_to_artist'] },
+);
+$q{artw}{query} = $q{artw}{rs}->as_query;
+
+$q{cd}{rs} = $q{artw}{rs}->search_related ('cd', {}, { join => [ 'artist', 'tracks' ] } );
+$q{cd}{query} = $q{cd}{rs}->as_query;
+
+$q{artw_back}{rs} = $q{cd}{rs}->search_related ('artwork',
+ {}, { join => { artwork_to_artist => 'artist' } }
+)->search_related ('artwork_to_artist', {}, { join => 'artist' });
+$q{artw_back}{query} = $q{artw_back}{rs}->as_query;
+
+for my $s (qw/a2a artw cd artw_back/) {
+ my $rs = $q{$s}{rs};
+
+ lives_ok ( sub { $rs->first }, "first() on $s does not throw an exception" );
+
+ lives_ok ( sub { $rs->count }, "count() on $s does not throw an exception" );
+
+ is_same_sql_bind ($rs->as_query, $q{$s}{query}, "$s resultset unmodified (as_query matches)" );
+}
+
Added: DBIx-Class/0.08/branches/mc_fixes/t/update/type_aware.t
===================================================================
--- DBIx-Class/0.08/branches/mc_fixes/t/update/type_aware.t (rev 0)
+++ DBIx-Class/0.08/branches/mc_fixes/t/update/type_aware.t 2009-06-11 14:54:49 UTC (rev 6624)
@@ -0,0 +1,27 @@
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+plan tests => 4;
+
+my $artist = $schema->resultset ('Artist')->first;
+ok (!$artist->get_dirty_columns, 'Artist is clean' );
+
+$artist->rank (13);
+ok (!$artist->get_dirty_columns, 'Artist is clean after num value update' );
+$artist->discard_changes;
+
+$artist->rank ('13.00');
+ok (!$artist->get_dirty_columns, 'Artist is clean after string value update' );
+$artist->discard_changes;
+
+# override column info
+$artist->result_source->column_info ('rank')->{is_numeric} = 0;
+$artist->rank ('13.00');
+ok ($artist->get_dirty_columns, 'Artist is updated after is_numeric override' );
+$artist->discard_changes;