[Catalyst] [OT] what would constitute a sensible set of benchmarks?

Robert 'phaylon' Sedlacek rs at 474.at
Mon Jan 15 11:21:05 GMT 2007

Daniel McBrearty wrote:

> completely academic at the moment, but it would be interesting to see
> the benchmark comparison thing done properly. If it were, the way
> would be to specify a set of application functions, let people within
> the various projects implement them as they wish, then benchmark. I
> suppose ...
> so what would be a decent set of tests? I'll have a stab ...

I see your stab and raise by a punch.

> 1. no db, no templating. Just have the app respond to a uri for a
> random number n, and respond with the random number in a plain text
> doc.
> so /text_string/abcde would expect to get back the string "abcde" in a
> text doc
> this could measure the ability of the app to parse the uri, and process it.

I think this is a bit too simple. We should probably look at the usual
kinds of URIs that real applications use here.

  ...and probably more...

Also, there should be more than one action. I'd say about 50 might be
a good measure, though my current app has a lot more than that...
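To make that concrete: something like this (plain Perl, no Catalyst; the
route patterns and names are made up for illustration) is roughly what
test 1 plus a bigger dispatch table would exercise:

```perl
#!/usr/bin/perl
# Sketch of the URI dispatch a benchmark app would spend its time in.
# A real test would have ~50 entries in the table, not three.
use strict;
use warnings;

# Dispatch table: pattern => action. Actions return (content type, body).
my @routes = (
    [ qr{^/text_string/(\w+)$} => sub { ( 'text/plain', $_[0] ) } ],
    [ qr{^/item/(\d+)$}        => sub { ( 'text/plain', "item $_[0]" ) } ],
    [ qr{^/item/(\d+)/edit$}   => sub { ( 'text/plain', "edit $_[0]" ) } ],
);

sub dispatch {
    my ($path) = @_;
    for my $route (@routes) {
        my ( $re, $action ) = @$route;
        if ( my @captures = $path =~ $re ) {
            return $action->(@captures);
        }
    }
    return ( 'text/plain', '404' );
}

my ( $type, $body ) = dispatch('/text_string/abcde');
print "$body\n";    # prints "abcde"
```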

> 2. same with templating. Now we could expect the string back in a
> simple html template ... although that doesn't expect the template
> system to do much work ... /html_string/xyz ...
> 3. db access, no templating. The db type, config, schema and dataset
> should be spec'd as part of the tests, to factor this out as far as
> possible. Then we could have several tests:
> - just retrieve a row and display results /db_retrieve
> - same with one or more joins required /db_join
> - write/update a row /db_write
> 4. a random mix of all the above.

Personally, I don't care about templating and ORM benchmarks, so I'll
skip those here :)
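For the actual measuring, once the HTTP layer is factored out, each test
boils down to a timing loop like this (my own sketch; Time::HiRes is
core Perl, and the action sub is a stand-in for a real dispatched
action):

```perl
#!/usr/bin/perl
# Rough per-action benchmark loop: call an action N times, report the
# elapsed time and the call rate.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

sub bench {
    my ( $name, $iterations, $action ) = @_;
    my $t0 = [gettimeofday];
    $action->() for 1 .. $iterations;
    my $elapsed = tv_interval($t0);
    my $rate    = $iterations / $elapsed;
    printf "%-12s %6d calls in %.4fs (%.0f/s)\n",
        $name, $iterations, $elapsed, $rate;
    return $rate;
}

# Stand-in for the /text_string/abcde action from point 1.
bench( text_string => 10_000, sub { my $s = 'abcde'; uc $s } );
```

Comparing frameworks would then mean running the same loop against each
implementation of the agreed action set.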

# Robert 'phaylon' Sedlacek
# Perl 5/Catalyst Developer in Hamburg, Germany
{ EMail => ' rs at 474.at ', Web => ' http://474.at ' }
