[Catalyst] [OT] what would constitute a sensible set of benchmarks?

Daniel McBrearty danielmcbrearty at gmail.com
Mon Jan 15 10:49:23 GMT 2007


Completely academic at the moment, but it would be interesting to see
the benchmark comparison thing done properly. If it were, the way to
do it would be to specify a set of application functions, let people
within the various projects implement them as they wish, then
benchmark. I suppose ...

So what would be a decent set of tests? I'll have a stab ...


1. No db, no templating. Just have the app respond to a URI
containing a random string, and echo that string back in a plain
text doc.

So /text_string/abcde would be expected to give back the string "abcde" in a text doc.

This would measure the app's ability to parse the URI and process it.
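
To make test 1 concrete: on the Catalyst side, a minimal controller
action might look something like this (package and action names are
just placeholders, not a spec):

  package MyApp::Controller::Bench;
  use strict;
  use warnings;
  use base 'Catalyst::Controller';

  # /text_string/abcde -> "abcde" as text/plain
  sub text_string : Path('/text_string') Args(1) {
      my ( $self, $c, $str ) = @_;
      $c->response->content_type('text/plain');
      $c->response->body($str);
  }

  1;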

2. Same, with templating. Now we'd expect the string back in a
simple HTML template ... although that doesn't ask the template
system to do much work ... /html_string/xyz ...

3. DB access, no templating. The db type, config, schema and dataset
should be spec'd as part of the tests, to factor the db out as far as
possible. Then we could have several tests:
 - just retrieve a row and display results /db_retrieve
 - same with one or more joins required /db_join
 - write/update a row /db_write
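
Assuming the Catalyst app uses a DBIx::Class model, and inventing a
trivial schema purely for illustration (an Item table with a name, a
counter, and a related Category), the three actions could be roughly:

  # /db_retrieve/42 -> fetch one row by primary key
  sub db_retrieve : Path('/db_retrieve') Args(1) {
      my ( $self, $c, $id ) = @_;
      my $row = $c->model('DB::Item')->find($id);
      $c->response->content_type('text/plain');
      $c->response->body( $row ? $row->name : 'not found' );
  }

  # /db_join/42 -> same, but prefetch a related table
  sub db_join : Path('/db_join') Args(1) {
      my ( $self, $c, $id ) = @_;
      my $row = $c->model('DB::Item')->find( $id, { prefetch => 'category' } );
      $c->response->content_type('text/plain');
      $c->response->body( $row ? $row->category->name : 'not found' );
  }

  # /db_write/42 -> update a row in place
  sub db_write : Path('/db_write') Args(1) {
      my ( $self, $c, $id ) = @_;
      my $row = $c->model('DB::Item')->find($id);
      $row->update( { counter => $row->counter + 1 } ) if $row;
      $c->response->content_type('text/plain');
      $c->response->body( $row ? 'ok' : 'not found' );
  }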

4. A random mix of all the above.

Could use siege to actually do the tests. Of course, we might just end
up proving that the db makes more difference than anything else ...
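
For the record, a siege run against test 1 might look something like
this (host, port and numbers invented, obviously):

  # 50 concurrent users, no delay, 60 seconds, one URL
  siege -b -c 50 -t 60S http://localhost:3000/text_string/abcde

and for the mixed test, siege can pull the URLs from a file and hit
them in random order:

  siege -i -c 50 -t 60S -f urls.txt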

This is just mindblobs at the moment, but the other thread made me
think, and I wondered if something like this has been done already.
Would be interesting.

D

-- 
Daniel McBrearty
email : danielmcbrearty at gmail.com
www.engoi.com : the multi-language vocab trainer
BTW : 0873928131


