[Dbix-class] how to add a method that performs a database lookup?

Ash Berlin ash_cpan at firemirror.com
Tue Dec 16 00:47:14 GMT 2008


On 16 Dec 2008, at 00:31, Toby Corkindale wrote:

> Adam Witney wrote:
>>>>
>>>> I am new to DBIx::Class, but have so far been impressed with how  
>>>> easy it is to use, so thanks to those who have developed and  
>>>> continue to develop it.
>>>>
>>>> I wanted to know, is there a way of adding a method to a class  
>>>> that performs a separate database lookup? I have used the  
>>>> Cookbook example for "Arbitrary SQL through a custom  
>>>> ResultSource", but this returns a query that is plugged into a  
>>>> sub-select. Can I access the database connection handle at all  
>>>> (e.g. $dbh) to build my own SQL query?
>>>>
>>>> thanks for any help
>>>>
>>>> adam
>>>>
>>>>
>>>
>>> You can, but before I tell you how I'm going to ask what kind of  
>>> query you want. Give us SQL and we'll tell you if you can do it  
>>> without going via the raw DBH.
>> OK, I have several places where I think I might need this!
>> I have two tables containing data that originated from files; one  
>> table contains the filename, format etc. and the other table  
>> contains the data itself. The data table now contains about 100 million  
>> rows, and the data only needs to be retrieved in chunks (about  
>> 5k-10k rows) corresponding to a whole file (one row of the first  
>> table), although the required columns may vary between queries. I  
>> was thinking that creating a class and hence an object per row  
>> would make this slower, so I thought I could write a method that  
>> pulls back the whole data set in one go for the main class/object.
>
> Hmm, just out of interest, did you consider using the Pg blob format  
> for the file data instead? (That lets you work on it using  
> read/write/seek semantics.) Or has that fallen out of fashion these days?
>
> If you're working with frames of a movie, or rows of a CSV file, or  
> other such record-based data, then I totally understand that you would  
> want to store them all in individual rows instead, though.
>
> If I understand your query, you want to do something like:
> SELECT filedata.foo, filedata.bar
> FROM filedata JOIN metadata ON filedata.metadata = metadata.id
> WHERE metadata.filename = 'myFile.bin'
> However, you expect that to return roughly 7,500 (+/- 2,500) rows and  
> you'd like to retrieve them in chunks?
>
>
> You could do it with pages, like this?
> my $page = 1;
> while (
>   my @items = $schema->resultset('Filedata')->search(
>     {
>       'metadata.filename' => $filename,
>     },
>     {
>       join => 'metadata',
>       rows => 1000,
>       page => $page++,
>     }
>   )
> ) {
>   do_stuff_with(@items);
> }
>
> But you're correct that it would be creating an object for every row.
> However, DBIC should be pulling down all 1000 of them from the DB in  
> a single query.
>
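
To close the loop on the "can I get at $dbh" part of the original
question: yes, but go via the storage layer rather than holding on to a
handle yourself, so that DBIC can keep the connection alive for you. A
rough, untested sketch (the table and column names are made up):

  $schema->storage->dbh_do(
    sub {
      my ($storage, $dbh, @args) = @_;
      # plain DBI from here on
      my $sth = $dbh->prepare(
        'SELECT foo, bar FROM filedata WHERE metadata = ?'
      );
      $sth->execute(@args);
      while (my $row = $sth->fetchrow_arrayref) {
        do_stuff_with(@$row);
      }
    },
    $metadata_id,
  );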

That said, for what you describe you shouldn't need to drop down to raw
SQL at all. Enter HashRefInflator, which as it happens is designed for
exactly this purpose:

http://search.cpan.org/~ash/DBIx-Class-0.08010/lib/DBIx/Class/Manual/Cookbook.pod#Skip_object_creation_for_faster_results
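
For the archives, usage is roughly as follows (untested; I've reused
Toby's 'Filedata'/'metadata' names, invented 'foo'/'bar' columns, and
added a 'columns' attribute since the columns you need vary per query):

  my $rs = $schema->resultset('Filedata')->search(
    {
      'metadata.filename' => $filename,
    },
    {
      join    => 'metadata',
      columns => [qw/foo bar/],   # only fetch what this query needs
      rows    => 1000,
      page    => $page,
    }
  );

  # Skip row-object creation entirely; ->next now returns plain hashrefs
  $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');

  while (my $row = $rs->next) {
    do_stuff_with($row->{foo}, $row->{bar});
  }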


