[Bast-commits] r8916 - ironman

idn at dev.catalyst.perl.org
Sat Mar 6 20:16:32 GMT 2010


Author: idn
Date: 2010-03-06 20:16:32 +0000 (Sat, 06 Mar 2010)
New Revision: 8916

Modified:
   ironman/notes.pod
Log:
Latest updates to the documentation to break down deployment etc

Modified: ironman/notes.pod
===================================================================
--- ironman/notes.pod	2010-03-06 20:11:56 UTC (rev 8915)
+++ ironman/notes.pod	2010-03-06 20:16:32 UTC (rev 8916)
@@ -1,13 +1,66 @@
-=head1 Installation notes for Ironman archives
+=head1 IronMan archives for The Enlightened Perl Organisation.
 
-This document is IanNs brain dump of how the ironman archives fit together.
+=head2 Service outline.
 
-This service is hosted on kitty.scsys.co.uk.  Development and testing is
-performed under the ironboy user and home dir.  Live service is then deployed
-under the ironman user and home dir.
+(Taken from http://www.enlightenedperl.org/ironman.html)
 
-=head2 Deployment process
+In order to promote the Perl language, and encourage more people within the
+Perl community to promote the language to the world outside the Perl
+echo-chamber, Enlightened Perl is pleased to announce the Iron Man Perl
+Blogging Challenge.
 
+The rules are very simple: you blog, about Perl. Any aspect of Perl you like.
+Long, short, funny, serious, advanced, basic: it doesn't matter. It doesn't
+have to be in English, either, if that's not your native language. To stay in
+the Iron Man Challenge, you must maintain a rolling frequency of 4 posts every
+32 days, with no more than 10 days between posts.
+
+Your blogs will be aggregated on the Iron Man Planet at
+http://ironman.enlightenedperl.org/.
+
+Everybody starts off as Paper Man (or Woman). Manage four posts, week on week,
+and you get to be Stone Man. Get further and you get to be Bronze Man. Six
+months of straight posting qualifies you as Iron Man. And don't worry if you
+fail - there's no losing this competition, you just go back to Paper Man (and
+there'll be a consolation prize while you work your way back up). For each
+level there will be a corresponding badge for your blog.
+
+There will be a post of the month competition: a committee of good Perl writers
+and bloggers, who will look over the posts and pick one out to be honoured each
+month. The winner will get a limited edition T-shirt, and possibly other prizes
+from sponsors.
+
+Better still, the post of the month committee members are all going to be
+available to critique your posts, if you want them to. These people all know
+how hard it is to write well and they're all willing to try and help you write
+better if you're interested.
+
+On top of that, there's an extra incentive: Enlightened Perl board member Matt
+Trout, whose idea this was, will be blogging as well. Obviously as the
+organizer, he's not eligible for a prize, but...
+
+If he misses a post once anyone has got to Iron Man, the people who have got
+there get to choose a talk title and a colour. And Matt will submit a talk of
+that title to YAPC::EU and YAPC::NA the next time there's a CFP, and he will
+give that talk with his hair dyed the colour that's been chosen. (If he can't
+find hair dye of that colour that's semi-permanent, he gets the option to spend
+the entire conference wearing clothes of that colour instead.)
+
+So... get blogging!
+
+Mail ironman@shadowcat.co.uk with your blog details. Start posting now.
+
+=head2 Logistics
+
+The IronMan service is currently hosted on kitty.scsys.co.uk.
+
+=head3 User accounts and deployment process.
+
+Development and testing are performed under the 'ironboy' user and home dir,
+which is visible at http://ironboy.enlightenedperl.org/.  Live service is
+deployed under the 'ironman' user and home dir, visible at
+http://ironman.enlightenedperl.org/.
+
 Thus spake the mst:
 
     12:18 < mst> it's the dev account for ironman.
@@ -18,70 +71,45 @@
 
 Here endeth the lesson....
 
-=head2 Shell access
+=head3 Shell access
 
-Use the sush command to become the ironboy user via sudo:
+Once you have logged in to the system using ssh, the accounts are accessed via
+the sush command, a wrapper script around sudo.
 
+Become the ironboy user via sudo:
+
     sush ironboy
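+
+As an illustration, a minimal sush-style wrapper might amount to little more
+than the following (a hypothetical sketch; see the actual script on the host
+for the real behaviour):
+
+    #!/bin/sh
+    # Start a login shell as the named user via sudo.
+    exec sudo -i -u "$1"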
 
-=head2 CPAN
+=head3 CPAN
 
-Ironboy and Ironman both make use of local::lib for their CPAN modules.
-Additional modules can be correctly installed within these paths simply by
-using CPAN as normal.
+The live and development deployments both make use of local::lib for their CPAN
+modules.  Additional modules can be correctly installed within these paths
+simply by using CPAN as normal.
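+
+For example, with the standard local::lib shell setup in place, CPAN installs
+land in the per-user path rather than the system perl (standard local::lib
+usage; the module named here is just an example):
+
+    # Add the local::lib environment variables to the current shell.
+    eval "$(perl -Mlocal::lib)"
+
+    # Installs into the local::lib path by default.
+    cpan DateTime::Format::SQLite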
 
 More info on the deeper magic at http://search.cpan.org/~apeiron/local-lib/lib/local/lib.pm
 
-=head1 Initial data import
 
-=head2 Data sources
 
-Create copies of the data so that we're not working on live files:
 
-    cd ~ironboy/ironman/
-    cp -r ~ironman/plagger/csv .
-    cp /var/www/ironman.enlightenedperl.org/plagger/subscriptions.db .
+=head1 Deployment to the development environment.
 
-=head2 Upgrade the database schema
+=head2 Check it out!
 
-Given the copy of the subscriptions.db file we have, we need to upgrade it to
-the latest schema version prior to being able to make full use of it.  This is
-another reason to do this on a copy and not the live data.
+Log in to the host and become the 'ironboy' user (assuming dev):
 
-Upgrade the copied subscriptions.db schema:
+    sush ironboy
 
-    cd ~ironboy/
-    perl -I ironman/plagger/lib/ \
-         ironman/IronMan-Web/script/upgrade-db-schema.pl \
-         --db_path=ironman/subscriptions.db
+Check out the required repository:
 
-This may generate a number of error messages.
+    svn co http://dev.catalyst.perl.org/repos/bast/ironman
 
-=head2 Importing the existing CSV data
+Run the web UI up as an independent service just to verify deps:
 
-Once the database schema has been upgraded, the next step is to perform the one
-off CSV import again utilising the test data copied from the live system.
-
-    cd ~ironboy/
     perl -I ironman/plagger/lib/ \
-         ironman/IronMan-Web/script/import_csv.pl \
-         --db_path=ironman/subscriptions.db \
-         --csv_path=ironman/csv
+         ironman/IronMan-Web/script/ironman_web_server.pl
 
-This will import the existing CSV data generated by plagger.  This import can
-be run multiple times against the same dataset with no ill effect.
+Install missing deps as required.
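+
+For instance, if the server start reports a missing module, install it via
+CPAN (the module named here is only an example) and run the server again:
+
+    cpan FCGI::ProcManager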
 
-=head2 Testing the web UI
-
-Change the DBI source in IronMan-Web/lib/IronMan/Web/Model/FeedDB.pm:
-
-    dbi:SQLite:/var/www/ironboy.enlightenedperl.org/ironman/subscriptions.db
-
-Run this up as an independent service just to verify deps:
-
-    perl -I ironman/plagger/lib/ \
-         ironman/IronMan-Web/script/ironman_web_server.pl
-
 =head2 Web UI deployment
 
 Create the socket and set permissions:
@@ -92,20 +120,12 @@
 
     ~ironman/treffpunkt/treffpunkt
 
-=head2 Additional deps
+=head2 Perlanet
 
-TryCatch::Error is required for the feed url downloading script.
+The following prerequisites may be tricky when installing Perlanet.
 
-	perl -I plagger/lib/ IronMan-Web/script/pull_urls.pl --db_path=subscriptions.db
+=head3 HTML::Tidy
 
-DateTime::Format::SQLite is required if using SQLite by the CSV Import script.
-
-FCGI::ProcManager is required for FastCGI
-
-=head1 Perlanet::IronMan
-
-=head2 HTML::Tidy
-
 HTML::Tidy needs a patch to pass tests:
 
 <snip>
@@ -130,7 +150,7 @@
          rc = ( tidyOptSetInt( tdoc, TidyWrapLen, 0 ) ? rc : -1 );
 </snip>
 
-=head2 Feed::Find
+=head3 Feed::Find
 
 Needs a patch to pass tests behind a proxy:
 
@@ -157,7 +177,7 @@
  @feeds = Feed::Find->find_in_html(\$res->content, BASE .  'anchors-only.html');
 </snip>
 
-=head2 URI::Fetch
+=head3 URI::Fetch
 
  URI::Fetch needs a patch to pass tests behind a proxy:
 
@@ -174,7 +194,7 @@
  
 </snip>
 
-=head2 Additional deps
+=head3 Additional deps
 
 You'll need the following additional deps for testing:
 
@@ -184,5 +204,78 @@
     LWP::UserAgent
     XML::OPML::SimpleGen
 
+=head2 Perlanet::IronMan
+
+Test the collection process by running the script:
+
+    cd ~/ironman/Perlanet-IronMan/
+    perl -Ilib -I../plagger/lib/ bin/ironman-collector.pl 
+
+This should go off and collect the feed data.  Once any obvious errors (such
+as missing modules) have been dealt with, create a crontab entry for the user
+so that the feed data is collected regularly:
+
+    # m h  dom mon dow   command
+    MAILTO="adminperson@example.com"
+
+    43 * * * * /var/www/ironboy.enlightenedperl.org/ironman/Perlanet-IronMan/bin/ironman-collector-cron-wrapper.sh
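+
+A minimal sketch of what such a cron wrapper might contain (hypothetical;
+consult the actual script in the repository for the real one):
+
+    #!/bin/sh
+    # Pick up the user's local::lib environment, then run the collector.
+    eval "$(perl -Mlocal::lib)"
+    cd /var/www/ironboy.enlightenedperl.org/ironman/Perlanet-IronMan || exit 1
+    exec perl -Ilib -I../plagger/lib/ bin/ironman-collector.pl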
+
+=head1 Migration from legacy plagger installation to new codebase.
+
+Prior to commencing work on data migration, ensure that any test or production
+cron jobs are disabled to prevent data inconsistency on the target platform.
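+
+For example, as each affected user, comment the collector entry out by hand
+(a manual edit; there is no automated toggle):
+
+    crontab -e    # comment out the ironman-collector line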
+
+=head2 Copying the existing data sources.
+
+Create copies of the data so that we're not working on live files:
+
+    cd ~ironboy/ironman/
+    cp -r ~ironman/plagger/csv .
+    cp /var/www/ironman.enlightenedperl.org/plagger/subscriptions.db .
+
+=head2 Upgrade the database schema
+
+Given the copy of the subscriptions.db file we have, we need to upgrade it to
+the latest schema version prior to being able to make full use of it.  This is
+another reason to do this on a copy and not the live data.
+
+Upgrade the copied subscriptions.db schema:
+
+    cd ~ironboy/
+    perl -I ironman/plagger/lib/ \
+         ironman/IronMan-Web/script/upgrade-db-schema.pl \
+         --db_path=ironman/subscriptions.db
+
+This may generate a number of error messages.
+
+=head2 Importing the existing CSV data
+
+Once the database schema has been upgraded, the next step is to perform the
+one-off CSV import again, utilising the data copied from the live system.
+
+    cd ~ironboy/
+    perl -I ironman/plagger/lib/ \
+         ironman/IronMan-Web/script/import_csv.pl \
+         --db_path=ironman/subscriptions.db \
+         --csv_path=ironman/csv
+
+This will import the existing CSV data generated by plagger.  This import can
+be run multiple times against the same dataset with no ill effect.
+
+=head2 Data updates.
+
+Once the data has been imported, we need to populate the missing URLs for the
+feeds based on the <link> tag returned by each feed:
+
+    perl -I plagger/lib/ IronMan-Web/script/pull_urls.pl --db_path=subscriptions.db
+
+=head2 Resuming live collection.
+
+Don't forget to resume live collection of the data once the migration has been
+completed, or things will quickly go out of date.
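+
+For example, re-enable the collector entries and confirm they are scheduled:
+
+    crontab -e    # uncomment the ironman-collector line
+    crontab -l    # verify the entry is active again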
+
 =cut
 



